An OutOfMemoryError occurs when the heap does not have enough memory to create new objects. If the heap does not have enough memory, where is the OutOfMemoryError object itself created? I am trying to understand this; please advise.
Of course, this is implementation-dependent behavior. HotSpot keeps some heap memory inaccessible to ordinary allocations, which the JVM can use to construct an OutOfMemoryError in. However, since Java allows an arbitrary number of threads, an arbitrary number of threads may hit the wall at the same time, so there is no guarantee that this memory is enough to construct a distinct OutOfMemoryError instance for each of them.
Therefore, an emergency OutOfMemoryError instance is created at JVM startup and persists throughout the entire session, to ensure that the error can be thrown even if there really is no memory left. Since this instance is shared by all threads that encounter the error while there is truly no memory left, you can recognize this extreme condition by the fact that the error has no stack trace.
The following program
ConcurrentHashMap<OutOfMemoryError,Integer> instances = new ConcurrentHashMap<>();
ExecutorService executor = Executors.newCachedThreadPool();
executor.invokeAll(Collections.nCopies(1000, () -> {
ArrayList<Object> list = new ArrayList<>();
for(;;) try {
list.add(new int[10_000_000]);
} catch(OutOfMemoryError err) {
instances.merge(err, 1, Integer::sum);
return err;
}
}));
executor.shutdown();
System.out.println(instances.size()+" distinct errors created");
instances.forEach((err,count) -> {
StackTraceElement[] trace = err.getStackTrace();
System.out.println(err.getClass().getName()+"#"+Integer.toHexString(err.hashCode())
+(trace!=null&&trace.length!=0? " has": " has no")+" stacktrace, used "+count+'x');
});
running under jdk1.8.0_65 with -Xmx100M and waiting half a minute gave me
5 distinct errors created
java.lang.OutOfMemoryError#c447d22 has no stacktrace, used 996x
java.lang.OutOfMemoryError#fe0b0b7 has stacktrace, used 1x
java.lang.OutOfMemoryError#1e264651 has stacktrace, used 1x
java.lang.OutOfMemoryError#56eccd20 has stacktrace, used 1x
java.lang.OutOfMemoryError#70ab58d7 has stacktrace, used 1x
showing that the reserved memory could serve the construction of four distinct OutOfMemoryError instances (including the memory needed to record their stack traces) while all other threads had to fall back to the reserved shared instance.
Of course, numbers may vary between different environments.
The error object is created natively by the JVM, which is not itself limited by -Xmx or other such parameters. It is the heap reserved for your program that is exhausted, not the memory available to the JVM process.
This is sample code for my main application's problem. When I generate the data it keeps taking more RAM (which, by the way, is fine). But when I stop the process, the memory is still held (I can see it in the Task Manager). I tried using System.gc(), but that didn't work either. At some point the program gets stuck because it keeps taking up more memory. I hope somebody can help me.
public static ArrayList<String> my = new ArrayList<>();
public static int val = 0;
// Code for Start Button
try {
new Thread(new Runnable() {
@Override
public void run() {
String ss = "";
for (int i = 0; i < 10000; i++) {
ss += "ABC";
}
while (true) {
if (val == 0) {
for (int i = 0; i < 30; i++) {
my.add(ss + new SimpleDateFormat("yyyyMMddHHmmssSSS"));
}
try {
Thread.sleep(50);
} catch (InterruptedException ex) {
}
} else {
Thread.yield();
break;
}
}
}
}).start();
} catch (Exception e) {
e.printStackTrace();
}
// Code for Stop Button
val = 1;
my.clear();
my = null;
System.gc();
Runtime.getRuntime().freeMemory();
Garbage collection depends on various factors, like which collector you're using, the machine's physical memory, and the JVM version. Since you haven't mentioned much about them, it's a bit hard to predict what could be the cause of this. I'll assume you're using Java 8, since that's the most popular version nowadays.
Since Java 8, there has been a change in the JVM memory model: there is no longer a Permanent Generation space. Up to Java 6, Permanent Generation was where the String Pool was located (I'm focusing on Strings since you're concatenating Strings in loops), and it was also where class metadata, including the references from the static fields you declare, was kept. Refer to the Java (JVM) Memory Model – Memory Management in Java document.
Instead of Permanent Generation, since Java 8 there is a new memory area called Metaspace, which lives in native memory, outside of the Java heap.
Also, when you concatenate String objects like this, the existing object is not modified, because String is an immutable type. Instead, each concatenation creates a new String object with the new value, and those objects are ordinary heap objects (not Metaspace data). This might be part of the reason you're seeing memory usage grow.
Even though Metaspace lives in main/physical memory and can expand dynamically, it is still limited by the physical memory available. That's why I mentioned the machine's physical memory as a factor earlier.
When we come to garbage collection, you haven't mentioned any GC configuration, so I assume you're using Parallel GC, which is the default collector in Java 8 (you can find more about the collectors in the same link provided above). I'd guess Parallel GC's performance is adequate for this task, so invoking System.gc() should be enough without any extra JVM flags.
But, as you mention, System.gc() doesn't clean up the memory, and that could be because you're using a separate thread to build and accumulate these Strings.
Usually, Strings created from a String literal (String s = "abc") do not become eligible for garbage collection, because there is an implicit reference to the String object in the code of every method that uses the literal (check this answer: When will a string be garbage collected in java). So you have to lose those implicit references by letting the method finish executing.
You're starting a new Thread to do this work, and I can't find any place where you interrupt that thread. Thread.yield() (Thread.yield) only tells the thread scheduler that this thread is willing to give up its current use of the CPU and be scheduled again as soon as possible; it does not stop the thread. So this Thread object is still alive and still refers to those String objects, which keeps them from becoming eligible for garbage collection. That is probably why the System.gc() invocation is not working.
As a solution, try to interrupt the thread and let it terminate, instead of just yielding.
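A rough sketch of that idea (mine, not from the answer): the thread is kept in a hypothetical worker field so the stop handler can reach it, the loop exits on interruption, and the payload is simplified to a timestamp suffix built once with a StringBuilder.

private static Thread worker;   // hypothetical field so the stop handler can reach the thread

// Start button handler
worker = new Thread(() -> {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 10000; i++) {
        sb.append("ABC");
    }
    String ss = sb.toString();
    while (!Thread.currentThread().isInterrupted()) {
        for (int i = 0; i < 30; i++) {
            my.add(ss + System.nanoTime());   // simplified payload; the exact content doesn't matter here
        }
        try {
            Thread.sleep(50);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();   // restore the flag; the while condition then ends the loop
        }
    }
    // run() returns here, so the thread dies and stops pinning ss
});
worker.start();

// Stop button handler
worker.interrupt();   // actually ends the loop, unlike Thread.yield()
my.clear();           // drop the accumulated Strings so the GC can reclaim them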
Update 1:
Before Java 7, the String Pool was located in PermGen, where garbage collection is very limited. PermGen has a fixed maximum size and cannot expand beyond it at runtime; if it runs out of space, you get a java.lang.OutOfMemoryError: PermGen space error. As a temporary remediation you can increase the PermGen size using the -XX:MaxPermSize=512m flag.
But remember that this only works on JVMs before Java 8, and even on Java 7 it makes no difference to the String Pool's capacity, because from Java 7 onwards the String Pool has moved to the heap.
I just found out that there are some libraries to compute the shallow size of a Java object, so I thought I could also write this in a very simple way. Here is what I tried:
Start the program with some Xmx, say A.
Create objects of the type whose size you want to calculate (say type T) and store them in a list so that the GC can't clean them up.
When we hit an OOM, let the code handle it and empty the list.
Now check the number of objects of type T we allocated. Let this be n.
Do a binary search on the Xmx delta needed in order to successfully allocate n+1 objects.
Here is the code I tried out:
import java.util.ArrayList;
public class test {
public static void main(String[] a) {
ArrayList<Integer> l = new ArrayList<>();
int i=0;
try {
while(true) {
l.add(new Integer(1));
i++;
}
} catch(Throwable e) {
} finally {
l.clear();
System.out.println(i + "");
}
}
}
But I noticed that the number of objects allocated in each run for the same Xmx varies. Why is this? Is anything inside the JVM randomized?
But I noticed that the number of objects allocated in each run for the same Xmx varies. Why is this?
Some events in the JVM are non-deterministic, and this can affect garbage collector behavior.
But there could also be factors in play that result in variable numbers of (your) objects being created before you fill up the heap. These include:
Not all of the objects in the heap will be the ArrayList and Integer objects that you are explicitly creating. There will be the Object[] arrays that get created each time the ArrayList resizes itself, various objects generated by your println calls ... and other things that happen under the hood.
Heap resizing behavior. The heap is not immediately sized to the -Xmx size. The JVM typically starts with a smaller heap and expands it on demand. By the time you get an OOME, the JVM has most likely expanded the heap to the maximum permitted, but the sequence of expansions is potentially sensitive to ... various factors, including some that may be non-deterministic. (A small sketch for observing this follows after this list.)
Heap generations. A typical Java GC uses an old space and a new space. The old space contains long-lived objects; new objects are allocated into the new space ... unless they are very large. The actual distribution of the objects can affect when GC runs occur, and when the JVM decides that the heap is full.
JIT compilation. At certain points in the execution of your application, the JVM will (typically) decide to JIT compile your code. When this happens, extra objects get allocated.
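To observe the heap-resizing factor mentioned in this list, here is a small sketch (mine, not from the original answer) that logs the committed heap size against the maximum while allocating; totalMemory() typically grows in steps toward maxMemory():

import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        List<int[]> hold = new ArrayList<>();
        try {
            for (int i = 0; ; i++) {
                hold.add(new int[100_000]);   // keep allocating until the heap is exhausted
                if (i % 10 == 0) {
                    System.out.printf("committed=%d MB, max=%d MB%n",
                            rt.totalMemory() >> 20, rt.maxMemory() >> 20);
                }
            }
        } catch (OutOfMemoryError e) {
            hold.clear();   // release everything so we can still print
            System.out.println("OOME once the heap had grown to its maximum");
        }
    }
}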
Is anything inside the JVM randomized?
It is unlikely that explicit randomization is affecting this benchmark. There is sufficient non-determinism at various levels (i.e. in the hardware, the OS and the JVM) to explain the inconsistent results you are seeing.
In short: I wouldn't expect your benchmark to give consistent results for the number of objects that can be created.
I'm trying to generate classes and load them at run time.
I'm using a ClassLoader object to load the classes. Since I don't want to run out of PermGen memory, from time to time I un-reference the class loader and create a new one to load the new classes to be used. This seems to work fine and I don't get a PermGen out-of-memory error.
The problem is that when I do that, after a while I get the following error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
So my question is: when should I un-reference the class loader to avoid both errors? Should I monitor the PermGen usage in my code, so that I un-reference the class loader and call System.gc() when the PermGen usage is close to the limit?
Or should I follow a different approach? Thanks
There is no single correct answer to this.
On the one hand, if unlinking the classloader is solving your permgen leakage problems, then you should continue to do that.
On the other hand, a "GC overhead limit exceeded" error means that your application is spending too much time garbage collecting. In most circumstances, this means that your heap is too full. But that can mean one of two things:
The heap is too small for your application's requirements.
Your application has a memory leak.
You could assume that the problem is the former one and just increase the heap size. But if the real problem is the latter one, then increasing the heap size is just postponing the inevitable ... and the correct thing to do would be to find and fix the memory leak.
Don't call System.gc(). It won't help.
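If you do decide to monitor the usage in code, as the question suggests, here is a rough sketch using the standard management API. The pool-name matching is an assumption on my part; the exact name varies by JVM and collector (e.g. "PS Perm Gen", "CMS Perm Gen").

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public final class PermGenMonitor {
    /** Returns the used/max ratio of the permanent generation pool, or -1 if it cannot be found. */
    public static double permGenUsageRatio() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Perm Gen")) {   // pool name differs between collectors
                MemoryUsage usage = pool.getUsage();
                return usage.getMax() > 0 ? (double) usage.getUsed() / usage.getMax() : -1;
            }
        }
        return -1;
    }
}

The caller could drop its class loader reference once the ratio crosses some threshold, but as noted above this only treats the symptom if the real problem is a leak.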
Are you loading the same class multiple times?
Because you should cache the loaded class.
If not, how many classes are you loading?
If there are plenty, you may have to set a limit on the number of loaded classes (the limit can be based on the heap size, or on how much memory a loaded class takes) and discard the least recently used one when loading the next.
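A rough sketch of that kind of cache (mine, not from the answer): it keys generated classes by name, defines each one in its own throwaway loader, and evicts the least recently used entry. The GeneratedClassCache name and defineFrom method are made up for illustration.

import java.util.LinkedHashMap;
import java.util.Map;

public class GeneratedClassCache {
    private final int maxEntries;
    // access-ordered LinkedHashMap evicts the least recently used class definition
    private final Map<String, Class<?>> cache;

    public GeneratedClassCache(int maxEntries) {
        this.maxEntries = maxEntries;
        this.cache = new LinkedHashMap<String, Class<?>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Class<?>> eldest) {
                return size() > GeneratedClassCache.this.maxEntries;
            }
        };
    }

    /** Returns the cached class, or defines it in a fresh single-class loader and caches it. */
    public synchronized Class<?> defineFrom(String name, byte[] bytecode) {
        Class<?> cached = cache.get(name);
        if (cached != null) {
            return cached;
        }
        Class<?> defined = new ClassLoader(getClass().getClassLoader()) {
            Class<?> define() {
                return defineClass(name, bytecode, 0, bytecode.length);
            }
        }.define();
        cache.put(name, defined);   // may evict the least recently used class, and with it its loader
        return defined;
    }
}

Once an entry is evicted and nothing else references that class, both the class and its single-use loader become unreachable and can be unloaded.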
I had somewhat similar situation with class unloading.
I'm using several class loaders to simulate multiple JVMs inside of a JUnit test (this is usually used to work with an Oracle Coherence cluster, but I have also successfully used this technique to start a multi-node HBase/Hadoop cluster inside one JVM).
For various reasons, tests may require a restart of such a "virtual" JVM, which means discarding the old ClassLoader and creating a new one.
Sometimes the JVM delays class unloading even if you force a Full GC, which leads to various problems later.
One technique I found useful for forcing the JVM to collect PermSpace is the following.
public static void forcePermSpaceGC(double factor) {
if (PERM_SPACE_MBEAN == null) {
// probably not a HotSpot JVM
return;
}
else {
double f = ((double)getPermSpaceUsage()) / getPermSpaceLimit();
if (f > factor) {
List<String> bloat = new ArrayList<String>();
int spree = 0;
int n = 0;
while(spree < 5) {
try {
byte[] b = new byte[1 << 20];
Arrays.fill(b, (byte)('A' + ++n));
bloat.add(new String(b).intern());
spree = 0;
}
catch(OutOfMemoryError e) {
++spree;
System.gc();
}
}
return;
}
}
}
Full sourcecode
I'm filling the PermSpace with Strings using intern() until the JVM collects them.
But:
I'm using that technique only for testing.
Various combinations of hardware and JVM versions may require different thresholds, so it is often quicker to restart the whole JVM instead of forcing it to properly collect all garbage.
I have a simple example. The example loads an ArrayList<Integer> from a file f containing 10000000 random integers.
doLog("Test 2");
{
FileInputStream fis = new FileInputStream(f);
ObjectInputStream ois = new ObjectInputStream(fis);
List<Integer> l = (List<Integer>) ois.readObject();
ois.close();
fis.close();
doLog("Test 2.1");
//l = null;
doLog("Test 2.2");
}
doLog("Test 2.3");
System.gc();
doLog("Test 2.4");
When I have l = null, I get this log:
Test 2 Used Mem = 492 KB Total Mem = 123 MB
Test 2.1 Used Mem = 44 MB Total Mem = 123 MB
Test 2.2 Used Mem = 44 MB Total Mem = 123 MB
Test 2.3 Used Mem = 44 MB Total Mem = 123 MB
Test 2.4 Used Mem = 493 KB Total Mem = 123 MB
But when I remove it, I get this log instead.
Test 2 Used Mem = 492 KB Total Mem = 123 MB
Test 2.1 Used Mem = 44 MB Total Mem = 123 MB
Test 2.2 Used Mem = 44 MB Total Mem = 123 MB
Test 2.3 Used Mem = 44 MB Total Mem = 123 MB
Test 2.4 Used Mem = 44 MB Total Mem = 123 MB
Used Memory is calculated by: runTime.totalMemory() - runTime.freeMemory()
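The doLog helper is not shown in the question; a plausible minimal version consistent with the log format above (purely an assumption on my part) might look like this:

static void doLog(String label) {
    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory();
    // print used memory in KB or MB to match the output shown above
    String usedStr = used >= (1 << 20) ? (used >> 20) + " MB" : (used >> 10) + " KB";
    System.out.println(label + " Used Mem = " + usedStr + " Total Mem = " + (rt.totalMemory() >> 20) + " MB");
}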
Question: In the case where l = null; is removed (commented out), is there a memory leak?
l is inaccessible after the block, so why can't it be freed?
There is no memory leak in the above code.
As soon as you leave the code block enclosed in {}, the variable l falls out of scope, and the List is a candidate for garbage collection, regardless of whether you set it to null first or not.
However, after the code block and until the method returns, the List is in a state called invisible. While this is the case, the JVM is unlikely to automatically null out the reference and collect the List's memory. Therefore, explicitly setting l = null helps the JVM collect the memory before you do your memory calculations. Otherwise, it will happen automatically when the method returns.
You will probably get different results for different runs of your code, since you never know exactly when the garbage collector will run. You can suggest that you think it should run using System.gc() (and it might even collect the invisible List without your setting l = null), but there are no promises. The javadoc for System.gc() states:
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the Java Virtual Machine has made a best effort to reclaim space from all discarded objects.
I think there's a bit of a semantics issue here. "Memory leak" generally means a program (piece of software, etc.) has stored some data in memory and then got into a state where it can no longer access that in-memory data to clean it up, so that the memory can never be reclaimed for future use. This, as far as I can tell, is the general definition.
A real-world use of the term "memory leak" is usually in reference to programming languages where it's up to the developer to manually allocate memory for the data they intend to place on the heap: languages such as C, C++, Objective-C (*), etc. For example, the malloc function and the new operator both allocate memory for data or a class instance that will live in the heap memory space. In such languages, a pointer needs to be kept to instances allocated this way if we later want to clean up the memory they use (when they're no longer needed). Continuing the example, an instance created on the heap with new can later be removed from memory by using the delete operator and passing it the pointer as a parameter.
Thus, for such languages, a memory leak usually means having data placed on the heap and subsequently either:
arriving into a state where there's no longer a pointer to that data
or
forgetting/ignoring to manually "de-allocate" that on-the-heap data (via its pointer)
Now, in the context of such a definition of "memory leak", this can pretty much never happen in Java. Technically, in Java it is the Garbage Collector's task to decide when heap-allocated instances are no longer referenced or have fallen out of scope, and to clean them up. There is no equivalent of the C++ delete operator in Java that would allow the developer to manually "de-allocate" instances/data from the heap. Even setting all the pointers to an instance to null will not immediately free up that instance's memory; it will only make it "garbage collectable", leaving it to the Garbage Collector thread(s) to clean it up during its sweeps.
Now, one other thing that can happen in Java is never letting go of pointers to certain instances even though they will no longer be needed after a given point, or giving certain instances a scope that is too big for what they are used for. This way, they hang around in memory longer than needed (or forever, where "forever" means until the JVM process is killed), and the Garbage Collector will not collect them even though, from a functional standpoint, they should be cleaned up. This can lead to behaviour similar to a "memory leak" in the broader sense, where "memory leak" simply stands for "having stuff in memory when it's no longer needed and having no way to clean it up".
Now, as you can see, "memory leak" is a somewhat vague term, but from what I can see, your example doesn't contain a memory leak (not even the version where you don't set l = null). All your variables are in a tight scope delimited by the brace block, they are used inside that block, and they fall out of scope when the block ends, so they'll be garbage collected "properly" (from the functional standpoint of your program). As @Keppil states, setting the pointer to null gives the GC a better hint as to when to clean up the corresponding instance, but even if you never set it to null, your code does not (unnecessarily) hang on to instances, so there is no memory leak there.
A typical example of a Java memory leak is code deployed into a Java EE application server that spawns threads outside the control of said application server (imagine a servlet that starts a Quartz job). If the application is deployed and undeployed multiple times, it's possible that some of those threads will not be killed at undeploy time but will be (re)started again at deploy time, leaving them, and any instances they may have created, hanging uselessly in memory.
(*) Later versions of Objective-C also make it possible to have heap memory managed automatically, in a fashion similar to Java's garbage collection mechanism.
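To make the Java EE scenario above concrete, here is a rough sketch of the leaky pattern described, assuming the javax.servlet API; the class name and the job loop are hypothetical stand-ins for a Quartz-style job.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class LeakyJobStarter implements ServletContextListener {
    private Thread jobThread;   // never interrupted below, which is exactly the leak

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        jobThread = new Thread(() -> {
            while (true) {   // hypothetical stand-in for a scheduled job loop
                try {
                    Thread.sleep(60_000);
                    // ... do periodic work, possibly caching data in fields ...
                } catch (InterruptedException e) {
                    return;   // would end the thread, but nobody ever interrupts it
                }
            }
        });
        jobThread.start();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // BUG: the thread is not interrupted here, so after undeploy it keeps the
        // webapp's classes and class loader reachable, and a redeploy starts another one.
    }
}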
The real answer is that unless the code is JIT'ed, all local variables are 'reachable' within the method body.
Moreover, the curly brackets do absolutely nothing in the bytecode; they exist only at the source level, and the JVM is completely unaware of them. Setting l to null effectively drops the reference from the stack frame, so the List is GC'd for real. Happy stuff.
If you had used another method instead of an inline block, everything would have passed without any surprises.
If the code is JIT'ed and the JVM compiler has built reaching definitions (also this), setting l = null would most likely have no effect and the memory would be freed in either case.
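A small sketch of the refactoring hinted at above: move the deserialization into its own method so the local variable disappears with its stack frame and l = null becomes unnecessary. The readIntegers name is made up, doLog is the question's logging helper, and the usual java.io/java.util imports are assumed.

private static List<Integer> readIntegers(File f) throws IOException, ClassNotFoundException {
    try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(f))) {
        @SuppressWarnings("unchecked")
        List<Integer> l = (List<Integer>) ois.readObject();
        doLog("Test 2.1");
        return l;   // the local reference vanishes when this frame is popped
    }
}

// caller: ignore the return value if you only want to measure; the list is then
// unreachable as soon as readIntegers() returns
// readIntegers(f);
// doLog("Test 2.3");
// System.gc();
// doLog("Test 2.4");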
Question: In the case of removing l = null; (i.e. not having this line of code), is this a memory leak?
No, but using this "pattern" does help the GC reclaim the memory sooner.
I am working on a Java application whose architecture has a Java EE component at one end and a C++ component at the other.
When I run the app continuously I get a java.lang.OutOfMemoryError in the Java heap. I was told this is different from a Java memory leak. If so, what is the difference between an OutOfMemoryError and a Java memory leak? And how can I analyse this with a Java profiler?
A memory leak in Java is when objects you aren't using cannot be garbage collected because you still have a reference to them somewhere.
An OutOfMemoryError is thrown when there is no memory left to allocate new objects. This can be caused by a memory leak, but can also happen if you're just trying to hold too much data in memory at once.
The JDK includes useful tools like jhat and visualVM that allow you to inspect the objects in memory and the references between them. Using these you can often find the objects that are causing the problem.
Example
Here is a particularly silly memory leak. The old objects are never used, but cannot be garbage collected. While it may seem ridiculous, you can easily create an equivalent leak by mistake in large projects.
import java.util.ArrayList;
import java.util.List;

public class Leaky
{
private static List<Object> neverRead = new ArrayList<Object>();
public static void main(String[] args)
{
while(true)
{
neverRead.add(new Object());
}
}
}
This one is not a memory leak, but will usually cause an OutOfMemoryError somewhere.
import java.util.Arrays;

public class Allocaty
{
public static void main(String[] args)
{
long[] array = new long[Integer.MAX_VALUE];
long value = 1L;
for(int ii=array.length-1; ii>=0; ii--)
{
array[ii] = value++;
}
String str = Arrays.toString(array);
System.out.printf("%d: %s", array.length, str);
}
}
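To chase down a leak like Leaky with the tools mentioned above, one option is to trigger a heap dump programmatically and open the file in jhat or VisualVM. A sketch using the HotSpot-specific diagnostic MXBean (the dump path is an arbitrary example):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void dump(String path) throws Exception {
        // HotSpot-only MBean; other JVMs expose different diagnostics
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, true);   // true = dump only live (reachable) objects
    }

    public static void main(String[] args) throws Exception {
        dump("leaky.hprof");   // then open the file with jhat or VisualVM
    }
}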
I was told this is different from a Java memory leak. If so, what is the difference between an OutOfMemoryError and a Java memory leak?
The two are closely related. OutOfMemoryError is an Error (not an exception, and therefore won't be caught by a catch(Exception e) block) that gets thrown when the JVM runs out of memory. A memory leak is a possible cause of the JVM running out of memory. And in your case as described, I'd say it is the probable cause.
(There are other possible causes besides memory leaks. You may be trying to run the application on a problem that is too big for the configured heap size. Alternatively, you might have a bug that causes it to allocate (say) a ridiculously large array.)
Think of it like a bucket for holding water: a leak means it's losing water, but simply being "out of space" means you're trying to put too much into it! So a bucket can be out of space without any question of having a leak.