We have a service that is being monitored via JMX. The JVM heap usage is growing, and even major collections are not able to remove the garbage. Inspecting the heap shows that the garbage consists of RMI-related references (mostly, if not all, class loaders). The only way to alleviate the issue is to trigger an explicit GC call through JMX, which removes all of the accumulated garbage. Our GC-related options are:
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:SurvivorRatio=8
-XX:MaxTenuringThreshold=1
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
And we have not touched either DisableExplicitGC or sun.rmi.dgc.server.gcInterval.
I believe the problem is supposed to be addressed by the code in sun.misc.GC.Daemon:
public void run() {
    for (;;) {
        long l;
        synchronized (lock) {
            l = latencyTarget;
            if (l == NO_TARGET) {
                /* No latency target, so exit */
                GC.daemon = null;
                return;
            }

            long d = maxObjectInspectionAge();
            if (d >= l) {
                /* Do a full collection. There is a remote possibility
                 * that a full collection will occur between the time
                 * we sample the inspection age and the time the GC
                 * actually starts, but this is sufficiently unlikely
                 * that it doesn't seem worth the more expensive JVM
                 * interface that would be required.
                 */
                System.gc();
                d = 0;
            }

            /* Wait for the latency period to expire,
             * or for notification that the period has changed
             */
            try {
                lock.wait(l - d);
            } catch (InterruptedException x) {
                continue;
            }
        }
    }
}
For some reason the above System.gc() is not being invoked (verified by looking at the GC logs). Does anyone have a suggestion as to how to address the issue?
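For reference, the RMI DGC interval is controlled by standard system properties, and HotSpot can turn explicit GC calls into concurrent CMS cycles instead of stop-the-world collections. A sketch of how these might be set (the one-hour values are illustrative only, not a verified fix):

java -Dsun.rmi.dgc.server.gcInterval=3600000 \
     -Dsun.rmi.dgc.client.gcInterval=3600000 \
     -XX:+ExplicitGCInvokesConcurrent \
     ...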
Does throwing OutOfMemoryError trigger the heap dump, or does memory actually need to be exhausted?
In other words, will a heap dump be produced if I:
throw new java.lang.OutOfMemoryError();
and have set
-XX:+HeapDumpOnOutOfMemoryError
Is this universally true for all JVMs, or is this likely to be vendor-specific?
Why: I want to simulate OOME for testing purposes, and would prefer to have a one-line way of doing this. Just throwing the Error seems logical.
Because the documentation doesn't say, and it may or may not be vendor-specific, I would just create a large object to force an OOME.
I used this simple Runnable to spawn a Thread causing an OOME when I needed to:
// requires imports: java.util.ArrayList, java.util.List
private static class OOMRunnable implements Runnable {

    private static final int ALLOCATE_STEP_SIZE = 1_000_000;

    @Override
    public void run() {
        long bytesUsed = 0L;
        List<long[]> eatingMemory = new ArrayList<>();
        while (true) {
            eatingMemory.add(new long[ALLOCATE_STEP_SIZE]);
            bytesUsed += Long.BYTES * ALLOCATE_STEP_SIZE;
            System.out.printf("%d MB allocated%n", bytesUsed / 1_000_000);
        }
    }
}
Recently my operations colleague reported that our production environment has many full GCs, which hurt application response time, and he supplied an image.
In particular, he said that StackTraceElement instances take up 85 MB, and he suggested removing code such as:
e.printStackTrace();
Now I want to simulate this situation locally, so I wrote the test code below:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.log4j.Logger; // assuming log4j, given getLogger(Class)

public class FullGCByLogTest {

    private static final Logger log = Logger.getLogger(FullGCByLogTest.class);

    public static final byte[] _1M = new byte[1 * 1024 * 1024]; // placeholder purpose

    public static void main(String[] args) throws InterruptedException {
        int nThreads = 1000; // concurrency level
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        while (true) {
            final CountDownLatch latch = new CountDownLatch(nThreads);
            for (int i = 0; i < nThreads; i++) {
                pool.submit(new Runnable() {
                    @Override
                    public void run() {
                        latch.countDown();
                        try {
                            latch.await(); // wait so the code below runs concurrently
                        } catch (InterruptedException e1) {
                        }
                        try {
                            int i = 1 / 0;
                            System.out.println(i);
                        } catch (Exception e) {
                            e.printStackTrace();
                            // log.error(e.getMessage(), e);
                        }
                    }
                });
            }
            try {
                Thread.sleep(100); // pause 100 ms between rounds
            } catch (InterruptedException e) {
            }
        }
    }
}
I ran this class with these VM args:
-Xmx4m -Xms4m -XX:NewSize=1m -XX:MaxNewSize=1m -XX:+PrintGCDetails
Then in jvisualvm's Visual GC view I saw the old gen at 7 MB, even though I set the max heap to 4 MB.
In addition, in the heap dump I did not find StackTraceElement. So how could I emulate this problem successfully?
The StackTraceElement objects are actually created when an exception object is instantiated, and they will be eligible for garbage collection as soon as the exception object is unreachable.
I suspect that the real cause for your (apparent) storage leak is that something in your code is saving lots of exception objects.
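For illustration only, the kind of pattern that would pin StackTraceElement graphs in memory might look like this (class and method names are invented):

import java.util.ArrayList;
import java.util.List;

class ErrorLog {
    // Anti-pattern: each retained exception keeps its StackTraceElement[]
    // strongly reachable, so the GC can never reclaim it.
    private static final List<Exception> RECENT_ERRORS = new ArrayList<Exception>();

    static void record(Exception e) {
        RECENT_ERRORS.add(e); // grows without bound
    }
}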
Calling printStackTrace() does not leak objects. Your colleague has misdiagnosed the problem. However, calling printStackTrace() all over the place is ugly, and if it happens frequently it will lead to performance issues.
Your simulation and its results are a red herring, but the probable reason the heap is bigger than you asked for is that the JVM has "rounded up" to a larger heap size. (4 MB is a minuscule heap size, impractical for most Java programs.)
So how could I emulate this problem successfully?
Emulation is highly unlikely to tell you anything useful. You need to get hold of a heap dump from the production system and analyze that.
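For reference, a live heap dump can typically be captured from a running HotSpot JVM with jmap (the PID is system-specific):

jmap -dump:live,format=b,file=heap.hprof <pid>

The resulting .hprof file can then be analyzed in a tool such as Eclipse MAT or jvisualvm.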
I have a couple of applications that run at specified intervals. To monitor OutOfMemoryError, I've decided to enable HeapDumpOnOutOfMemoryError, and before doing this I decided to do some research. Some of the applications have a maximum heap size of 2 GB, so generating multiple heap dumps in rapid succession could eat up all the disk space.
I've written a small test program to check how it will work.
import java.util.LinkedList;
import java.util.List;

public class Test implements Runnable {

    public static void main(String[] args) throws Exception {
        new Thread(new Test()).start();
    }

    public void run() {
        while (true) {
            try {
                List<Object> list = new LinkedList<Object>();
                while (true) {
                    list.add(new Object());
                }
            } catch (Throwable e) {
                System.out.println(e);
            }
            try {
                Thread.sleep(1000);
            } catch (InterruptedException ignored) {
            }
        }
    }
}
And here is the result
$ java -XX:+HeapDumpOnOutOfMemoryError -Xmx2M Test
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid25711.hprof ...
Heap dump file created [14694890 bytes in 0,101 secs]
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
It works as I would want it to, but I would like to know why.
Looking at the OpenJDK 6 source code, I found the following:
void report_java_out_of_memory(const char* message) {
  static jint out_of_memory_reported = 0;

  // A number of threads may attempt to report OutOfMemoryError at around the
  // same time. To avoid dumping the heap or executing the data collection
  // commands multiple times we just do it once when the first thread reports
  // the error.
  if (Atomic::cmpxchg(1, &out_of_memory_reported, 0) == 0) {
    // create heap dump before OnOutOfMemoryError commands are executed
    if (HeapDumpOnOutOfMemoryError) {
      tty->print_cr("java.lang.OutOfMemoryError: %s", message);
      HeapDumper::dump_heap_from_oome();
    }

    if (OnOutOfMemoryError && OnOutOfMemoryError[0]) {
      VMError err(message);
      err.report_java_out_of_memory();
    }
  }
}
How does the first if statement work?
EDIT: It seems a heap dump should be created every time the message is printed, but that does not happen. Why is that so?
The if statement contains a compare-and-exchange atomic operation, which returns 0 if and only if the exchange was performed by the running thread. Compare-and-exchange (also known as compare-and-swap) works the following way:
1. You supply the value you think the variable currently contains (0 in your case; the variable is out_of_memory_reported).
2. You supply the value you would like to exchange it for (1 in your case).
3. If the variable holds the expected value, it is atomically exchanged for the replacement value (no other thread can change the value after it has been compared against your expectation), and 0 is returned.
4. Otherwise nothing happens, and a value different from 0 is returned to indicate the failure.
As for the EDIT: only the first "java.lang.OutOfMemoryError: Java heap space" line in your output comes from the VM's report_java_out_of_memory (together with the heap dump); the later lines are printed by the System.out.println(e) in your catch block, which the compare-and-exchange does not affect.
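The same once-only gating can be sketched in plain Java with AtomicInteger (a minimal analogy to the VM code above, not the VM code itself):

import java.util.concurrent.atomic.AtomicInteger;

public class ReportOnce {

    private static final AtomicInteger reported = new AtomicInteger(0);

    static void reportOutOfMemory(String message) {
        // compareAndSet succeeds only for the single thread that swaps
        // 0 -> 1; every other thread sees 1 and skips the block.
        if (reported.compareAndSet(0, 1)) {
            System.out.println("java.lang.OutOfMemoryError: " + message);
            // ... dump the heap, run OnOutOfMemoryError commands, etc.
        }
    }
}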
I built a sample program to demonstrate a memory leak in Java.
import java.lang.management.ManagementFactory;

public class MemoryLeakTest {

    static int depth = 0;
    int number = 0;
    MemoryLeakTest mobj;

    MemoryLeakTest() {
        number = depth;
        if (depth < 6500) {
            depth++;
            mobj = new MemoryLeakTest();
        }
    }

    protected void finalize() {
        System.out.println(number + " released.");
    }

    public static void main(String[] args) {
        try {
            System.out.println(ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
            System.out.println("Free Memory in starting " + Runtime.getRuntime().freeMemory());
            MemoryLeakTest testObj = new MemoryLeakTest();
            System.out.println("Free Memory in end " + Runtime.getRuntime().freeMemory());
            System.out.println(ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
        } catch (Exception exp) {
        } finally {
            System.out.println("Free Memory" + Runtime.getRuntime().freeMemory());
            System.out.println(ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
        }
    }
}
I ran it, changing the value of N in if(depth < N). Here are the results:
when depth is 1000
init = 16777216(16384K) used = 288808(282K) committed = 16252928(15872K) max = 259522560(253440K)
Free Memory in starting 15964120
Free Memory in end 15964120
init = 16777216(16384K) used = 288808(282K) committed = 16252928(15872K) max = 259522560(253440K)
Free Memory 15964120
init = 16777216(16384K) used = 288808(282K) committed = 16252928(15872K) max = 259522560(253440K)
when depth is 1500
init = 16777216(16384K) used = 288808(282K) committed = 16252928(15872K) max = 259522560(253440K)
Free Memory in starting 15964120
Free Memory in end 15964120
init = 16777216(16384K) used = 288808(282K) committed = 16252928(15872K) max = 259522560(253440K)
Free Memory 15873528
init = 16777216(16384K) used = 379400(370K) committed = 16252928(15872K) max = 259522560(253440K)
when depth is 6000
init = 16777216(16384K) used = 288808(282K) committed = 16252928(15872K) max = 259522560(253440K)
Free Memory in starting 15964120
Free Memory in end 15692784
init = 16777216(16384K) used = 560144(547K) committed = 16252928(15872K) max = 259522560(253440K)
Free Memory 15692784
init = 16777216(16384K) used = 560144(547K) committed = 16252928(15872K) max = 259522560(253440K)
when depth is 6500 (Exception in thread "main" java.lang.StackOverflowError)
init = 16777216(16384K) used = 288808(282K) committed = 16252928(15872K) max = 259522560(253440K)
Free Memory in starting 15964120
Free Memory in end 15676656
init = 16777216(16384K) used = 576272(562K) committed = 16252928(15872K) max = 259522560(253440K)
My questions are:
1. It is not calling finalize(). Is it a memory leak?
2. There is no change in free memory up to N=1000, but when N=1500 there are two different values for used memory at the end of the program, i.e. 282K and 370K. Why is that so?
3. When N=6500, the JVM generates an error. So why are the last two statements of the try block executed?
Your program won't "leak", as Java will take care of anything left "dangling" out there. That's the benefit of a garbage-collected language.
But what you do have is a StackOverflowError. Basically, the stack (the chain of functions you're in, and how deep it goes) is much, MUCH smaller than the heap. The heap is more or less the size of main memory, while each thread's stack is far smaller. You're hitting that limit with your recursive constructor.
If you want to test "leaks" (or rather, the idea that you eventually won't have any), try something more like this:
import java.lang.management.ManagementFactory;

public class MemoryLeakTest {

    int number = 0;
    public MemoryLeakTest mobj;

    MemoryLeakTest(int num) {
        number = num;
    }

    protected void finalize() {
        System.out.println(number + " released.");
    }

    public static void main(String[] args) {
        try {
            System.out.println(ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
            System.out.println("Free Memory in starting " + Runtime.getRuntime().freeMemory());
            MemoryLeakTest first = new MemoryLeakTest(0); // keep a reference to the head of the chain
            MemoryLeakTest current = first;
            // the first program argument gives the number of objects to allocate
            for (int i = 1; i < Integer.parseInt(args[0]); i++) {
                current.mobj = new MemoryLeakTest(i);
                current = current.mobj;
            }
            System.out.println("Free Memory in end " + Runtime.getRuntime().freeMemory());
            System.out.println(ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
        } catch (Exception exp) {
        } finally {
            System.out.println("Free Memory" + Runtime.getRuntime().freeMemory());
            System.out.println(ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
        }
    }
}
That will give you a "chain" of objects all in memory until first goes out of scope.
It is not calling finalize(). Is it a memory leak?
No, there is no memory leak: you always keep an accessible reference to your testObj object, and that's the reason finalize() will never be called in your application.
All you do in your application is create a huge object graph.
Here you can find an explanation of how to create a real memory leak in Java.
It is not calling finalize(). Is it a memory leak?
finalize() is not guaranteed to be called: it is invoked when the garbage collector collects the given object, but objects are not guaranteed to be collected before execution ends.
There is no change in free memory up to N=1000, but when N=1500 there are two different values for used memory at the end of the program, i.e. 282K and 370K. Why is that so?
I think it depends on when the garbage collector gets executed.
When N=6500, the JVM generates an error. So why are the last two statements of the try block executed?
This is because you're not catching the error: StackOverflowError inherits from Error, which is not part of the Exception inheritance branch but is rather a sibling of Exception. In any case you have no code in the catch block; the last two statements of your try are not executed once the error has been thrown.
In summary, you didn't produce a memory leak. Memory leaks happen in Java when you hold references to objects that remain reachable (directly or indirectly) from the execution flow even though they are no longer needed, for example when you store objects in a collection you can still reach, or in singletons.
The garbage collector itself is smart enough to free object graphs that are not reachable from the program at all.
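As a sketch of the collection case described above (the class and field names are invented):

import java.util.ArrayList;
import java.util.List;

public class LeakyRegistry {

    // Static root: everything added here stays strongly reachable for
    // the lifetime of the class, so the GC can never reclaim it.
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void remember(byte[] data) {
        CACHE.add(data); // entries are never removed: a classic Java "leak"
    }
}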
I hope that makes it clear.
Most of the answers have already explained the difference between a StackOverflowError and a memory leak.
There is no change in free memory up to N=1000, but when N=1500 there are two different values for used memory at the end of the program, i.e. 282K and 370K. Why is that so?
That's because every time you create a new object, the previous one can become unreachable (no references, or its reference overwritten) and hence can be freed if required.
The simplest example so far to make the JVM run out of memory (not a leak):
public class PrintSeries {

    private static String COMMA = ",";
    private StringBuilder buildStream; // = new StringBuilder();

    public static void main(String[] args) {
        System.out.println(new PrintSeries().convert(10));
        System.out.println(new PrintSeries().convert(1000000000));
    }

    private String convert(int n) {
        buildStream = new StringBuilder();
        while (n > 1) {
            buildStream.append(n-- + COMMA);
        }
        buildStream.append(n);
        return buildStream.toString();
    }
}
output
10,9,8,7,6,5,4,3,2,1
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
at java.lang.StringBuilder.append(StringBuilder.java:119)
at com.cctest.algotest.string.PrintSeries.convert(PrintSeries.java:17)
at com.cctest.algotest.string.PrintSeries.main(PrintSeries.java:10)
This is not evidence of a memory leak. The program is throwing a StackOverflowError, not an OutOfMemoryError. What is actually going on is that the constructor is calling itself recursively, and when the number of recursive calls exceeds some large number (between 6,000 and 6,500), you run out of stack space.
It is not calling finalize(). Is it a memory leak?
No. The finalize() method is most likely not being called because the GC has not run, and it has not run because you haven't filled the heap. Even if that were not the real explanation, there is no guarantee that finalize() will ever be called. The only absolute guarantee you have is that finalize() will be called before the object's memory is reused by the JVM.
There is no change in free memory up to N=1000, but when N=1500 there are two different values for used memory at the end of the program, i.e. 282K and 370K. Why is that so?
I'm not sure why that happens, but I don't think it indicates anything significant. (All sorts of things happen under the hood in a JVM that can be sources of non-determinism in memory allocation and usage patterns.)
When N=6500, the JVM generates an error. So why are the last two statements of the try block executed?
The statements in the finally block are always executed, unless the JVM terminates abruptly. When a StackOverflowError is thrown, it propagates like any other exception and can be caught and recovered from (in some cases).
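A minimal sketch of catching it (illustrative only; recovery is reasonable here because no important state is corrupted):

public class CatchSoe {

    static int depth(int n) {
        return depth(n + 1); // unbounded recursion
    }

    public static void main(String[] args) {
        try {
            depth(0);
        } catch (StackOverflowError e) {
            // The error propagates like any other Throwable and can be caught.
            System.out.println("caught: " + e);
        }
    }
}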
I'm going to use a SoftReference-based cache (a pretty simple thing by itself). However, I've come across a problem when writing a test for it.
The objective of the test is to check that the cache requests the previously cached object from the server again after a memory cleanup occurs.
Here I face the problem of how to make the system release the softly referenced objects. Calling System.gc() is not enough, because soft references will not be released until memory is low. I'm running this unit test on a PC, so the VM's memory budget could be pretty large.
================== Added later ==============================
Thank you all who took care to answer!
After considering all the pros and cons, I've decided to go the brute-force way, as advised by nanda and jarnbjo. It appeared, however, that the JVM is not that dumb: it won't even attempt garbage collection if you ask for a single block bigger than the VM's entire memory budget. So I've modified the code like this:
/* Force releasing SoftReferences */
try {
    final List<long[]> memhog = new LinkedList<long[]>();
    while (true) {
        memhog.add(new long[102400]);
    }
} catch (final OutOfMemoryError e) {
    /* At this point all SoftReferences have been released - GUARANTEED. */
}
/* continue the test here */
This piece of code forces the JVM to flush all SoftReferences. And it's very fast to do.
It's working better than the Integer.MAX_VALUE approach, since here the JVM really tries to allocate that much memory.
try {
    Object[] ignored = new Object[(int) Runtime.getRuntime().maxMemory()];
} catch (OutOfMemoryError e) {
    // Ignore
}
I now use this bit of code everywhere I need to unit test code using SoftReferences.
Update: This approach will indeed work only with less than 2G of max memory.
Also, one needs to be very careful with SoftReferences: it's easy to keep a hard reference by mistake, which negates the effect of the SoftReference.
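A sketch of the pitfall (a minimal illustration, not from the original test):

Object value = new Object();
SoftReference<Object> ref = new SoftReference<Object>(value);
// As long as 'value' stays strongly reachable, the collector will never
// clear 'ref', no matter how much memory pressure there is. Dropping the
// strong reference is what makes the soft one clearable:
value = null;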
Here is a simple test that shows this working every time on OS X. I'd be interested to know whether the JVM's behavior is the same on Linux and Windows.
for (int i = 0; i < 1000; i++) {
    SoftReference<Object> softReference = new SoftReference<Object>(new Object());
    if (null == softReference.get()) {
        throw new IllegalStateException("Reference should NOT be null");
    }

    try {
        Object[] ignored = new Object[(int) Runtime.getRuntime().maxMemory()];
    } catch (OutOfMemoryError e) {
        // Ignore
    }

    if (null != softReference.get()) {
        throw new IllegalStateException("Reference should be null");
    }

    System.out.println("It worked!");
}
An improvement that will work for more than 2 GB of max memory. It loops until an OutOfMemoryError occurs:
@Test
public void shouldNotHoldReferencesToObject() {
    final SoftReference<T> reference = new SoftReference<T>( ... );

    // Sanity check
    assertThat(reference.get(), not(equalTo(null)));

    // Force an OutOfMemoryError
    try {
        final ArrayList<Object[]> allocations = new ArrayList<Object[]>();
        int size;
        while ((size = Math.min(Math.abs((int) Runtime.getRuntime().freeMemory()), Integer.MAX_VALUE)) > 0) {
            allocations.add(new Object[size]);
        }
    } catch (OutOfMemoryError e) {
        // great!
    }

    // Verify the object has been garbage collected
    assertThat(reference.get(), equalTo(null));
}
1. Set the parameter -Xmx to a very small value.
2. Prepare your soft reference.
3. Create as many objects as possible, asking for the object each time, until the cache has to request the object from the server again.
This is my small test; modify it to your needs:
@Test
public void testSoftReference() throws Exception {
    Set<Object[]> s = new HashSet<Object[]>();
    SoftReference<Object> sr = new SoftReference<Object>(new Object());
    int i = 0;
    while (true) {
        try {
            s.add(new Object[1000]);
        } catch (OutOfMemoryError e) {
            // ignore
        }
        if (sr.get() == null) {
            System.out.println("Soft reference is cleared. Success!");
            break;
        }
        i++;
        System.out.println("Soft reference is not yet cleared. Iteration " + i);
    }
}
You could explicitly clear the soft reference in your test, and as such simulate that it has been released.
This avoids any complicated test setup that is memory- and garbage-collection-dependent.
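A minimal sketch of that idea, using the standard Reference.clear() method:

// uses java.lang.ref.SoftReference
SoftReference<Object> ref = new SoftReference<Object>(new Object());
ref.clear();              // same observable state as if the GC had cleared it
assert ref.get() == null; // the cache should now re-fetch from the server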
Instead of a long-running loop (as suggested by nanda), it's probably faster and easier to simply create a huge primitive array to allocate more memory than is available to the VM, then catch and ignore the OutOfMemoryError:
try {
    long[] foo = new long[Integer.MAX_VALUE];
} catch (OutOfMemoryError e) {
    // ignore
}
This will clear all weak and soft references, unless your VM has more than 16 GB of heap available.