HeapDumpOnOutOfMemoryError works only once on periodic tasks - java

I have a couple of applications that run at specified intervals. To monitor OutOfMemoryError I've decided to enable HeapDumpOnOutOfMemoryError, and before doing this I decided to do some research. Some of the applications have a maximum heap size of 2GB, so generating multiple heap dumps in rapid succession could eat up all the disk space.
I've written a small test program to check how it will work.
import java.util.LinkedList;
import java.util.List;

public class Test implements Runnable {

    public static void main(String[] args) throws Exception {
        new Thread(new Test()).start();
    }

    public void run() {
        while (true) {
            try {
                // allocate until the heap is exhausted
                List<Object> list = new LinkedList<Object>();
                while (true) {
                    list.add(new Object());
                }
            }
            catch (Throwable e) {
                System.out.println(e);
            }
            try {
                // pause before provoking the next OutOfMemoryError
                Thread.sleep(1000);
            }
            catch (InterruptedException ignored) {
            }
        }
    }
}
And here is the result:
$ java -XX:+HeapDumpOnOutOfMemoryError -Xmx2M Test
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid25711.hprof ...
Heap dump file created [14694890 bytes in 0,101 secs]
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
It works as I would want it to, but I would like to know why.
Looking at the OpenJDK 6 source code, I've found the following:
void report_java_out_of_memory(const char* message) {
  static jint out_of_memory_reported = 0;

  // A number of threads may attempt to report OutOfMemoryError at around the
  // same time. To avoid dumping the heap or executing the data collection
  // commands multiple times we just do it once when the first threads reports
  // the error.
  if (Atomic::cmpxchg(1, &out_of_memory_reported, 0) == 0) {
    // create heap dump before OnOutOfMemoryError commands are executed
    if (HeapDumpOnOutOfMemoryError) {
      tty->print_cr("java.lang.OutOfMemoryError: %s", message);
      HeapDumper::dump_heap_from_oome();
    }

    if (OnOutOfMemoryError && OnOutOfMemoryError[0]) {
      VMError err(message);
      err.report_java_out_of_memory();
    }
  }
}
How does the first if statement work?
EDIT: it seems that a heap dump should be created every time the message is printed, but that does not happen. Why is that so?

The if statement contains a compare-and-exchange atomic operation, which returns 0 if and only if the exchange was performed by the running thread. Compare-and-exchange (also known as compare-and-swap) works the following way:
1. Supply the value you expect the variable to contain (0 in your case; the variable is out_of_memory_reported).
2. Supply the value you would like to store in its place (1 in your case).
3. If the variable holds the expected value, it is atomically replaced with the new value (no other thread can change it between the comparison and the exchange), and the old value (0 here) is returned.
4. Otherwise nothing happens, and the current value (something other than 0) is returned to indicate the failure.
Since out_of_memory_reported is a static variable that is never reset, only the very first thread ever to report an OutOfMemoryError wins the exchange. That is why the heap dump is written exactly once per JVM run; every later OutOfMemoryError skips the whole block and you only see the plain error message.
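The same once-only guard can be sketched in plain Java with AtomicInteger (a rough analogue, not the HotSpot code itself; compareAndSet returns a boolean instead of the old value, but the effect is the same):

    import java.util.concurrent.atomic.AtomicInteger;

    public class ReportOnce {
        private static final AtomicInteger reported = new AtomicInteger(0);

        static void report(String message) {
            // compareAndSet(0, 1) succeeds for exactly one caller: the first thread
            // that still sees the field at 0. Every later caller skips the dump.
            if (reported.compareAndSet(0, 1)) {
                System.out.println("dumping heap for: " + message);
            } else {
                System.out.println("already reported, skipping dump: " + message);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Runnable r = new Runnable() {
                public void run() {
                    report("Java heap space");
                }
            };
            Thread a = new Thread(r);
            Thread b = new Thread(r);
            a.start();
            b.start();
            a.join();
            b.join();
        }
    }

No matter how many threads call report(), only one of them ever reaches the dumping branch, which matches the behaviour observed in the question.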

Related

How to emulate full GC by many StackTraceElement in heap

Recently my operations colleague reported that the production environment has many full GCs, which affect the app's response time. He supplied a screenshot,
and he specifically said that StackTraceElement takes up 85M, and suggested not having code like this, e.g.
e.printStackTrace();
Now I want to simulate this situation locally, and I wrote test code like the one below:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.log4j.Logger; // assuming log4j, since Logger.getLogger(Class) is used

public class FullGCByLogTest {
    private static final Logger log = Logger.getLogger(FullGCByLogTest.class);

    public static final byte[] _1M = new byte[1 * 1024 * 1024]; // placeholder purpose

    public static void main(String[] args) throws InterruptedException {
        int nThreads = 1000; // concurrent count
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        while (true) {
            final CountDownLatch latch = new CountDownLatch(nThreads);
            for (int i = 0; i < nThreads; i++) {
                pool.submit(new Runnable() {
                    @Override
                    public void run() {
                        latch.countDown();
                        try {
                            latch.await(); // wait so the code below executes concurrently
                        } catch (InterruptedException e1) {
                        }
                        try {
                            int i = 1 / 0;
                            System.out.println(i);
                        } catch (Exception e) {
                            e.printStackTrace();
                            // log.error(e.getMessage(), e);
                        }
                    }
                });
            }
            try {
                Thread.sleep(100); // pause 100 ms between rounds of concurrent calls
            } catch (InterruptedException e) {
            }
        }
    }
}
and I run this class with these VM args:
-Xmx4m -Xms4m -XX:NewSize=1m -XX:MaxNewSize=1m -XX:+PrintGCDetails
Then, in the jvisualvm Visual GC view, I found the old gen is 7 M, but I set the max heap to 4m.
In addition, in the heap dump I did not find any StackTraceElement. So how could I emulate this problem successfully?
The StackTraceElement objects are actually created when an exception object is instantiated, and they will be eligible for garbage collection as soon as the exception object is unreachable.
I suspect that the real cause for your (apparent) storage leak is that something in your code is saving lots of exception objects.
Calling printStackTrace() does not leak objects. Your colleague has misdiagnosed the problem. However, calling printStackTrace() all over the place is ugly ... and if it happens frequently, it will lead to performance issues.
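For illustration only (hypothetical code, not from the question): a pattern like the following keeps every exception, together with the StackTraceElement[] it carries, strongly reachable, and that is the kind of retention that shows up as 85M of StackTraceElement in a heap dump.

    import java.util.ArrayList;
    import java.util.List;

    public class ErrorCollector {
        // Hypothetical leak: exceptions are stored in a static list that is never
        // cleared, so neither they nor their stack trace elements can be collected.
        private static final List<Throwable> FAILURES = new ArrayList<Throwable>();

        public static void record(Throwable t) {
            t.getStackTrace(); // materializes the StackTraceElement[] for this exception
            FAILURES.add(t);   // the static list keeps it reachable forever
        }
    }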
Your simulation and its results are a red herring, but the probable reason that the heap is bigger than you asked for is that the JVM has "rounded up" to a larger heap size. (4MB is a minuscule heap size, and impractical for most Java programs.)
So how could I emulate this problem successfully?
Emulation is highly unlikely to tell you anything useful. You need to get hold of a heap dump from the production system and analyze that.
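For example, on a HotSpot JVM a dump of the live objects can usually be captured with the jmap tool that ships with the JDK (the pid placeholder is the production JVM's process id):

    jmap -dump:live,format=b,file=heap.hprof <pid>

The resulting .hprof file can then be opened in Eclipse Memory Analyzer or VisualVM to see what is actually keeping the StackTraceElement instances reachable.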

String vs StringBuffer in Java

After a lot of research, I got to know that String is immutable and that StringBuffer is more efficient than String if the program involves many modifications.
But my question is slightly different from these.
I have a function to which I pass a string. The string is actually the text of an article (nearly 3000-5000 characters). The function is run in threads; I mean to say, there are multiple calls of the function with a different String text each time. The later-stage computations in the function are quite heavy. Now, when I run my code with a large number of threads, I am getting an error saying: GC Overhead Limit Exceeded.
Since I can't reduce the computations in the later stages of the function, my question is: will it really help if I change the text type from String to StringBuffer? Also, I don't do any concatenation operations on the text string.
I have posted a small snippet of my code:
public static List<Thread> thread_starter(List<Thread> threads, String filename,
        ArrayList<String> prop, Logger L, Logger L1, int seq_no) {
    String text = "";
    if (prop.get(7).matches("txt"))
        text = read_contents.read_from_txt(filename, L, L1);
    else if (prop.get(7).matches("xml"))
        text = read_contents.read_from_xml(filename, L, L1);
    else if (prop.get(7).matches("html"))
        text = read_contents.read_from_html(filename, L, L1);
    else {
        System.out.println("not a valid config");
        L1.info("Error : config file not properly defined for i/p file type");
    }
    /* TODO */
    // System.out.println(text);
    /* TODO CHANGES TO BE DONE HERE */
    if (text.length() > 0) {
        Runnable task = new MyRunnable(text, filename, prop, filename, L, L1, seq_no);
        Thread worker = new Thread(task);
        worker.start();
        // Remember the thread for later usage
        threads.add(worker);
    } else {
        main_entry_class.file_mover(filename, false);
    }
    return threads;
}
And I'm calling the above function repeatedly using the following code:
List<Thread> threads = new ArrayList<Thread>();
thread_count = 10;
int file_pointer = 0; // integer pointer variable
do {
    if (file.size() <= file_pointer)
        break;
    else {
        String file_name = file.get(file_pointer);
        threads = thread_starter(threads, file_name, prop, L, L1, seq_no);
        file_pointer++;
        seq_no++;
    }
} while (check_status(threads, thread_count) == true);
And the check_status function:
public static boolean check_status(List<Thread> threads, int thread_count) {
    int running = 0;
    boolean flag = false;
    do {
        running = 0;
        for (Thread thread : threads) {
            if (thread.isAlive()) {
                // ThreadMXBean thMxB = ManagementFactory.getThreadMXBean();
                // System.out.println(thMxB.getCurrentThreadCpuTime());
                running++;
            }
        }
        if (Thread.activeCount() - 1 < thread_count) {
            flag = true;
            break;
        }
    } while (running > 0);
    return flag;
}
If you are getting the GC Overhead Limit Exceeded error, then you may first try an intermediate heap size such as -Xmx512m. Also, if you have a lot of duplicate strings, you can use String.intern() on them.
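A rough sketch of the intern() suggestion (assuming many of the strings passed around are duplicates; the class and the counts below are only illustrative): interned duplicates all point to one canonical instance instead of filling the heap with copies.

    import java.util.ArrayList;
    import java.util.List;

    public class InternDemo {
        public static void main(String[] args) {
            List<String> tokens = new ArrayList<String>();
            for (int i = 0; i < 1000000; i++) {
                // new String(...) creates a fresh copy each iteration;
                // intern() replaces it with the single pooled instance.
                tokens.add(new String("the").intern());
            }
            System.out.println("stored " + tokens.size() + " references to one shared string");
        }
    }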
You may also check the documentation for this GC option:
-XX:+UseConcMarkSweepGC
Check out this link to learn what the GC Overhead Limit Exceeded error is: GC overhead limit exceeded.
As that page suggests, this out-of-memory error occurs when the program spends too much time in garbage collection. So the problem is not the number of computations you do; it is the way you have implemented them. You might have a loop creating too many objects or something like that, so a StringBuffer might not help you.

Multithreading and Virtual Memory System

I'm trying to model a virtual memory system. What I would like to do is simulate multiple concurrent user processes using multi-threading.
I'm going to take in, through the command line: page size (bytes as ints), the number of pages each process gets in the simulated logical memory, the number of frames in the corresponding simulated physical memory, the number of total processes to simulate, and the actual logical addresses for each process (which will just be arbitrary ints contained in text files).
I guess what is mainly relevant to my question is how to create the n threads from the input and fill each thread's array of logical addresses. I'm reading up on multithreading but struggling a bit.
I'm trying to work with this thread code:
class Proccess implements Runnable {
    Thread t;
    List<Integer> addresses = new ArrayList<>();

    Proccess(String pNum) {
        t = new Thread(this, pNum);
        System.out.println("Child " + t.getName());
        t.start();
    }

    public void run() {
        try {
            // Is this where I want to fill in the addresses list?
            // I will be reading in a file to do this. Each
            // file is unique for each individual process
            // so I don't have to worry about multiple processes
            // accessing the same file.
        } catch (InterruptedException e) {
            System.out.println("Interrupted.");
        }
        System.out.println(".");
    }
}
Each process is going to have its own page table as well, and I am open to suggestions on how to effectively add/maintain that.
This isn't for school, so there are no specs I need to follow.
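For what it's worth, a minimal sketch of one possible approach (the class name SimulatedProcess and the one-address-per-line file format are assumptions, not from the question): each Runnable reads its own address file inside run(), so every simulated process fills its list on its own thread.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    class SimulatedProcess implements Runnable {
        private final String addressFile;                       // one file of logical addresses per process
        private final List<Integer> addresses = new ArrayList<>();

        SimulatedProcess(String addressFile) {
            this.addressFile = addressFile;
        }

        @Override
        public void run() {
            try {
                for (String line : Files.readAllLines(Paths.get(addressFile))) {
                    addresses.add(Integer.parseInt(line.trim())); // one logical address per line
                }
            } catch (IOException e) {
                System.out.println("Could not read " + addressFile + ": " + e);
            }
            System.out.println(Thread.currentThread().getName() + " loaded " + addresses.size() + " addresses");
        }

        public static void main(String[] args) {
            // one thread per simulated process, named by process number
            for (int i = 0; i < args.length; i++) {
                new Thread(new SimulatedProcess(args[i]), "process-" + i).start();
            }
        }
    }

A per-process page table could be kept the same way, for example as a Map<Integer, Integer> field on the same class mapping page number to frame number, since only the owning thread touches it.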

NioSocketChannel$WriteRequestQueue causing OutOfMemory

I am using Netty to perform a large file upload. It works fine, but the RAM used by the client seems to increase with the size of the file. This is not the expected behaviour, since everything is piped from reading the source file to writing the target file.
At first, I thought it was a kind of adaptive buffer growing until Xmx is reached, but setting Xmx to a reasonable value (50M) leads to an OutOfMemoryError soon after the upload starts.
After some research using Eclipse Memory Analyzer, it appears that the object retaining the heap memory is:
org.jboss.netty.channel.socket.nio.NioSocketChannel$WriteRequestQueue
Is there any option for setting a limit to this queue or do I have to code my own queue using ChannelFutures to control the number of bytes and block the pipe when the limit is reached?
Thanks for your help,
Regards,
Renaud
Answer from @normanmaurer on the Netty GitHub
You should use
Channel.isWritable()
to check if the "queue" is full. If it is full, you will need to check whether there is enough space before writing more. So the effect you see can happen if you write data more quickly than it can be sent out to the clients.
You can get around this kind of problem when writing a File by using DefaultFileRegion or ChunkedFile.
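A minimal sketch of that advice (the polling back-off below is just an illustration of the idea, not @normanmaurer's code; it assumes the Netty 3 org.jboss.netty API used in the question): check isWritable() before each write so the internal write queue cannot grow without bound.

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.channel.Channel;
    import org.jboss.netty.channel.ChannelFuture;

    public class ThrottledWriter {
        // Writes a chunk only when the channel reports it is writable, so Netty's
        // outbound queue stays bounded instead of buffering the whole file in RAM.
        public static ChannelFuture writeWhenWritable(Channel channel, ChannelBuffer chunk)
                throws InterruptedException {
            while (!channel.isWritable()) {
                Thread.sleep(10); // crude back-off; reacting to writability change
                                  // events in a handler would be the cleaner option
            }
            return channel.write(chunk);
        }
    }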
@normanmaurer thank you, I missed this method of the Channel!
I guess I need to read what's happening inside:
org.jboss.netty.handler.stream.ChunkedWriteHandler
UPDATED: 2012/08/30
This is the code I made to solve my problem:
public class LimitedChannelSpeaker {
    Channel channel;
    final Object lock = new Object();
    long maxMemorySizeB;
    long size = 0;
    Map<ChannelBufferRef, Integer> buffer2readablebytes = new HashMap<ChannelBufferRef, Integer>();

    public LimitedChannelSpeaker(Channel channel, long maxMemorySizeB) {
        this.channel = channel;
        this.maxMemorySizeB = maxMemorySizeB;
    }

    public ChannelFuture speak(ChannelBuffer buff) {
        if (buff.readableBytes() > maxMemorySizeB) {
            throw new IndexOutOfBoundsException("The buffer is larger than the maximum allowed size of " + maxMemorySizeB + "B.");
        }
        synchronized (lock) {
            // block until the queued bytes plus this buffer fit under the limit
            while (size + buff.readableBytes() > maxMemorySizeB) {
                try {
                    lock.wait();
                } catch (InterruptedException ex) {
                    throw new RuntimeException(ex);
                }
            }
            ChannelBufferRef ref = new ChannelBufferRef(buff);
            ref.register();
            ChannelFuture future = channel.write(buff);
            future.addListener(ref); // reuse the registered ref so unregister() removes the same map entry
            return future;
        }
    }

    private void spoken(ChannelBufferRef ref) {
        synchronized (lock) {
            ref.unregister();
            lock.notifyAll();
        }
    }

    private class ChannelBufferRef implements ChannelFutureListener {
        int readableBytes;

        public ChannelBufferRef(ChannelBuffer buff) {
            readableBytes = buff.readableBytes();
        }

        public void unregister() {
            buffer2readablebytes.remove(this);
            size -= readableBytes;
        }

        public void register() {
            buffer2readablebytes.put(this, readableBytes);
            size += readableBytes;
        }

        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            spoken(this);
        }
    }
}
"for a Desktop background application"
Netty is designed for highly scalable servers, e.g. around 10,000 connections. For a desktop application with fewer than a few hundred connections, I would use plain IO. You may find the code is much simpler, and it should use less than 1 MB.

How to make the Java system release Soft References?

I'm going to use a SoftReference-based cache (a pretty simple thing by itself). However, I've come across a problem when writing a test for it.
The objective of the test is to check if the cache does request the previously cached object from the server again after the memory cleanup occurs.
Here I hit the problem of how to make the system release softly referenced objects. Calling System.gc() is not enough, because soft references will not be released until memory is low. I'm running this unit test on a PC, so the memory budget of the VM could be pretty large.
================== Added later ==============================
Thank you all who took care to answer!
After considering all the pros and cons, I've decided to go the brute-force way, as advised by nanda and jarnbjo. It turned out, however, that the JVM is not that dumb - it won't even attempt garbage collection if you ask for a block which alone is bigger than the VM's memory budget. So I've modified the code like this:
/* Force releasing SoftReferences */
try {
    final List<long[]> memhog = new LinkedList<long[]>();
    while (true) {
        memhog.add(new long[102400]);
    }
}
catch (final OutOfMemoryError e) {
    /* At this point all SoftReferences have been released - GUARANTEED. */
}
/* continue the test here */
This piece of code forces the JVM to flush all SoftReferences. And it's very fast to do.
It's working better than the Integer.MAX_VALUE approach, since here the JVM really tries to allocate that much memory.
try {
    Object[] ignored = new Object[(int) Runtime.getRuntime().maxMemory()];
} catch (OutOfMemoryError e) {
    // Ignore
}
I now use this bit of code everywhere I need to unit test code using SoftReferences.
Update: this approach will indeed work only with less than 2G of max memory.
Also, one needs to be very careful with SoftReferences. It's very easy to keep a hard reference by mistake, and that will negate the effect of the SoftReferences.
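As a tiny illustration of that pitfall (hypothetical test code, not from this answer): as long as a local strong reference to the referent is still live, no amount of memory pressure will clear the soft reference.

    Object value = new Object();
    SoftReference<Object> ref = new SoftReference<Object>(value);

    // ... allocate until OutOfMemoryError, as in the snippets above ...

    // Because 'value' still strongly references the object, ref.get() can
    // never return null here and the test would fail for the wrong reason.
    System.out.println(ref.get() != null); // always true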
Here is a simple test that shows the maxMemory() allocation approach working every time on OS X. I would be interested in knowing whether the JVM's behavior is the same on Linux and Windows.
for (int i = 0; i < 1000; i++) {
    SoftReference<Object> softReference = new SoftReference<Object>(new Object());
    if (null == softReference.get()) {
        throw new IllegalStateException("Reference should NOT be null");
    }
    try {
        Object[] ignored = new Object[(int) Runtime.getRuntime().maxMemory()];
    } catch (OutOfMemoryError e) {
        // Ignore
    }
    if (null != softReference.get()) {
        throw new IllegalStateException("Reference should be null");
    }
    System.out.println("It worked!");
}
An improvement that will work for more than 2G of max memory. It loops until an OutOfMemoryError occurs.
@Test
public void shouldNotHoldReferencesToObject() {
    final SoftReference<T> reference = new SoftReference<T>( ... );

    // Sanity check
    assertThat(reference.get(), not(equalTo(null)));

    // Force an OoM
    try {
        final ArrayList<Object[]> allocations = new ArrayList<Object[]>();
        int size;
        while ((size = Math.min(Math.abs((int) Runtime.getRuntime().freeMemory()), Integer.MAX_VALUE)) > 0)
            allocations.add(new Object[size]);
    } catch (OutOfMemoryError e) {
        // great!
    }

    // Verify object has been garbage collected
    assertThat(reference.get(), equalTo(null));
}
1. Set the parameter -Xmx to a very small value.
2. Prepare your soft reference.
3. Create as many objects as possible. Ask for the cached object every time, until the cache has to request it from the server again.
This is my small test. Modify it as you need.
@Test
public void testSoftReference() throws Exception {
    Set<Object[]> s = new HashSet<Object[]>();
    SoftReference<Object> sr = new SoftReference<Object>(new Object());
    int i = 0;
    while (true) {
        try {
            s.add(new Object[1000]);
        } catch (OutOfMemoryError e) {
            // ignore
        }
        if (sr.get() == null) {
            System.out.println("Soft reference is cleared. Success!");
            break;
        }
        i++;
        System.out.println("Soft reference is not yet cleared. Iteration " + i);
    }
}
You could explicitly set the soft reference to null in your test, and thereby simulate that the soft reference has been released.
This avoids any complicated test setup that is memory- and garbage-collection-dependent.
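A minimal sketch of that idea (assuming the cache exposes, or can be handed, its SoftReference for testing): java.lang.ref.Reference.clear() empties the referent directly, which looks to the cache exactly like a reference the GC has cleared.

    SoftReference<Object> entry = new SoftReference<Object>(new Object());

    // Simulate the GC having released the softly reachable object:
    entry.clear();

    // The cache code under test now sees a cleared reference and must
    // fall back to fetching the value from the server again.
    assert entry.get() == null;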
Instead of a long-running loop (as suggested by nanda), it's probably faster and easier to simply create a huge primitive array to allocate more memory than is available to the VM, and then catch and ignore the OutOfMemoryError:
try {
    long[] foo = new long[Integer.MAX_VALUE];
}
catch (OutOfMemoryError e) {
    // ignore
}
This will clear all weak and soft references, unless your VM has more than 16GB of heap available (Integer.MAX_VALUE array elements at 8 bytes per long is roughly 16GB).
