NioSocketChannel$WriteRequestQueue causing OutOfMemory - java

I am using Netty to perform a large file upload. It works fine, but the RAM used by the client seems to grow with the size of the file. This is not the expected behaviour, since everything is piped from reading the source file to writing the target file.
At first I suspected some kind of adaptive buffer growing until Xmx is reached, but setting Xmx to a reasonable value (50M) leads to an OutOfMemoryError soon after the upload starts.
After some research using Eclipse Memory Analyzer, it appears that the object retaining the heap memory is:
org.jboss.netty.channel.socket.nio.NioSocketChannel$WriteRequestQueue
Is there any option for setting a limit to this queue or do I have to code my own queue using ChannelFutures to control the number of bytes and block the pipe when the limit is reached?
Thanks for your help,
Regards,
Renaud

Answer from @normanmaurer on the Netty GitHub
You should use
Channel.isWritable()
to check whether the "queue" is full. If it is, you will need to wait until there is enough space to write more. The effect you are seeing happens when you write data more quickly than it can be sent out to the clients.
You can get around this kind of problem by writing the file via DefaultFileRegion or ChunkedFile.
@normanmaurer thank you, I missed this method of the Channel!
I guess I need to read what's happening inside:
org.jboss.netty.handler.stream.ChunkedWriteHandler
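For reference, here is roughly what the ChunkedFile approach looks like in Netty 3 (a minimal sketch, assuming you configure the pipeline yourself; the handler name and class name are illustrative). The ChunkedWriteHandler pulls the next chunk only when the channel is writable, so the write queue never holds the whole file:

import java.io.File;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.handler.stream.ChunkedFile;
import org.jboss.netty.handler.stream.ChunkedWriteHandler;

public class ChunkedUploadSketch {
    public static void configure(ChannelPipeline pipeline) {
        // Must sit in the pipeline so it can feed chunks on writability changes.
        pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
    }

    public static ChannelFuture upload(Channel channel, File file) throws Exception {
        // Reads and writes the file in small chunks (8 KiB by default)
        // instead of queueing the entire file in memory.
        return channel.write(new ChunkedFile(file));
    }
}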
UPDATED: 2012/08/30
This is the code I made to solve my problem:
import java.util.HashMap;
import java.util.Map;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

public class LimitedChannelSpeaker {
    private final Channel channel;
    private final Object lock = new Object();
    private final long maxMemorySizeB;
    private long size = 0;
    private final Map<ChannelBufferRef, Integer> buffer2readablebytes =
            new HashMap<ChannelBufferRef, Integer>();

    public LimitedChannelSpeaker(Channel channel, long maxMemorySizeB) {
        this.channel = channel;
        this.maxMemorySizeB = maxMemorySizeB;
    }

    public ChannelFuture speak(ChannelBuffer buff) {
        if (buff.readableBytes() > maxMemorySizeB) {
            throw new IndexOutOfBoundsException(
                    "The buffer is larger than the maximum allowed size of " + maxMemorySizeB + "B.");
        }
        synchronized (lock) {
            // Block until the in-flight bytes plus this buffer fit under the limit.
            while (size + buff.readableBytes() > maxMemorySizeB) {
                try {
                    lock.wait();
                } catch (InterruptedException ex) {
                    throw new RuntimeException(ex);
                }
            }
            ChannelBufferRef ref = new ChannelBufferRef(buff);
            ref.register();
            ChannelFuture future = channel.write(buff);
            // Reuse the registered ref; creating a second one here would leak the
            // accounted bytes, since the registered ref would never be unregistered.
            future.addListener(ref);
            return future;
        }
    }

    private void spoken(ChannelBufferRef ref) {
        synchronized (lock) {
            ref.unregister();
            lock.notifyAll();
        }
    }

    private class ChannelBufferRef implements ChannelFutureListener {
        private final int readableBytes;

        public ChannelBufferRef(ChannelBuffer buff) {
            readableBytes = buff.readableBytes();
        }

        public void unregister() {
            buffer2readablebytes.remove(this);
            size -= readableBytes;
        }

        public void register() {
            buffer2readablebytes.put(this, readableBytes);
            size += readableBytes;
        }

        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            spoken(this);
        }
    }
}
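For illustration, a hypothetical upload loop using the class above (fileInputStream and channel are assumed to exist; the 1 MiB limit and 64 KiB read size are arbitrary):

// Blocks inside speak() whenever more than 1 MiB is still waiting to be written.
LimitedChannelSpeaker speaker = new LimitedChannelSpeaker(channel, 1024 * 1024);
byte[] chunk = new byte[64 * 1024];
int n;
while ((n = fileInputStream.read(chunk)) != -1) {
    speaker.speak(ChannelBuffers.copiedBuffer(chunk, 0, n));
}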

for a Desktop background application
Netty is designed for highly scalable servers, e.g. around 10,000 connections. For a desktop application with fewer than a few hundred connections, I would use plain IO. You may find the code is much simpler, and it should use less than 1 MB.
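A minimal plain-IO sketch of such an upload (host, port, and buffer size are placeholders). Memory stays bounded because the socket's OutputStream simply blocks when the receiver cannot keep up:

import java.io.*;
import java.net.Socket;

public class PlainIoUpload {
    public static void upload(String host, int port, File file) throws IOException {
        try (Socket socket = new Socket(host, port);
             InputStream in = new BufferedInputStream(new FileInputStream(file));
             OutputStream out = socket.getOutputStream()) {
            byte[] buffer = new byte[8192]; // the only buffer, regardless of file size
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            out.flush();
        }
    }
}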

Related

Java Socket InputStream reads data but returns it in the wrong order

I developed an application using Java sockets. I am exchanging messages with this application using byte arrays. I have a message named M1 that is 1979 bytes long. My socket read buffer is 512 bytes, so I read this message in 4 parts of 512 bytes each (the last part is, of course, 443 bytes). I will call these parts A, B, C, and D; so ABCD, in that order, is a valid message.
I have a loop with a thread which is like below.
BlockingQueue<Chunk> queue = new LinkedBlockingQueue<>();
InputStream in = socket.getInputStream();
byte[] buffer = new byte[512];
while (true) {
    int readResult = in.read(buffer);
    if (readResult != -1) {
        byte[] arr = Arrays.copyOf(buffer, readResult);
        Chunk c = new Chunk(arr);
        queue.put(c);
    }
}
I'm filling the queue with the code above. When message sending starts, I usually see the queue fill up in ABCD order, but sometimes the data ends up in the queue as BACD. I know this should be impossible, because TCP guarantees ordering.
I looked at dumps with Wireshark. The message arrives correctly in a single TCP packet, so there is no problem on the sender side. I am 100% sure the message arrives correctly, but the read method does not appear to read in the correct order, and it doesn't happen every time. I could not find a valid reason for this.
When I tried the same code on two different computers, I noticed the problem occurred on only one of them. The JDK versions on these computers differ. With JDK 8u202 I get the incorrect behaviour; with JDK 8u271 there is no problem. Maybe it is related to that, but I'm not sure, because I have no solid evidence.
I am open to all kinds of ideas and suggestions. It's really on its way to being the most interesting problem I've ever encountered.
Thank you for your help.
EDIT: I found a similar question:
Blocking Queue Take out of Order
EDIT:
Ok, I have read all the answers given below. Thank you for providing different perspectives. Let me fill in some missing information.
I actually have 2 threads. Thread 1 (SocketReader) is responsible for reading the socket. It wraps the bytes it reads in a Chunk class and puts it on the queue owned by Thread 2 (MessageDecoder), which consumes the blocking queue. There are no other threads. This is a simple example of the producer-consumer design pattern.
And yes, other messages are sent, but they take up less than 512 bytes, so I can read each of them in one go and encounter no ordering problem with them.
MessageDecoder.java
public class MessageDecoder implements Runnable {
    private final BlockingQueue<Chunk> queue = new LinkedBlockingQueue<>();

    public MessageDecoder() {
    }

    @Override
    public void run() {
        while (true) {
            try {
                Chunk c = queue.take();
                System.out.println(c.toString());
                // moved inside the try block: otherwise 'c' might not be initialized
                decodeMessageChunk(c);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public void put(Chunk c) {
        try {
            queue.put(c);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
SocketReader.java
public class SocketReader implements Runnable {
    private final MessageDecoder msgDec;
    private final InputStream in;
    private final byte[] buffer = new byte[512];

    public SocketReader(InputStream in, MessageDecoder msgDec) {
        this.in = in;
        this.msgDec = msgDec;
    }

    @Override
    public void run() {
        try {
            while (true) {
                int readResult = in.read(buffer);
                if (readResult != -1) {
                    byte[] arr = Arrays.copyOf(buffer, readResult);
                    Chunk c = new Chunk(arr);
                    msgDec.put(c);
                }
            }
        } catch (IOException e) { // read() is a checked call; run() cannot rethrow it
            e.printStackTrace();
        }
    }
}
Even though it's a FIFO queue, the locking of the LinkedBlockingQueue is unfair, so you can't guarantee the ordering of elements. More info regarding this here.
I'd suggest using an ArrayBlockingQueue instead. Like the LinkedBlockingQueue, order is not guaranteed by default, but it offers a slightly different locking mechanism.
This class supports an optional fairness policy for ordering waiting
producer and consumer threads. By default, this ordering is not
guaranteed. However, a queue constructed with fairness set to true
grants threads access in FIFO order. Fairness generally decreases
throughput but reduces variability and avoids starvation.
In order to set fairness, you must initialize the queue using the constructor that takes a fairness flag. So, for example:
ArrayBlockingQueue<Chunk> fairQueue = new ArrayBlockingQueue<>(1000, true);
/*.....*/
Chunk c = new Chunk(arr);
fairQueue.add(c);
As the docs state, this should grant thread access in FIFO order, making retrieval of the elements consistent and avoiding the possible lock "barging" that can happen with LinkedBlockingQueue's lock mechanism.

How to emulate full GC caused by many StackTraceElement objects in the heap

Recently my operations colleague reported that the production environment has many full GCs, which affect app response time. He supplied an image,
and he specifically said that StackTraceElement occupies 85M, suggesting we remove code such as:
e.printStackTrace();
Now I want to simulate this situation locally, so I wrote the test code below:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.log4j.Logger;

public class FullGCByLogTest {
    private static final Logger log = Logger.getLogger(FullGCByLogTest.class);
    public static final byte[] _1M = new byte[1 * 1024 * 1024]; // placeholder to use up heap

    public static void main(String[] args) throws InterruptedException {
        int nThreads = 1000; // concurrency level
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        while (true) {
            final CountDownLatch latch = new CountDownLatch(nThreads);
            for (int i = 0; i < nThreads; i++) {
                pool.submit(new Runnable() {
                    @Override
                    public void run() {
                        latch.countDown();
                        try {
                            latch.await(); // wait so the code below executes concurrently
                        } catch (InterruptedException e1) {
                        }
                        try {
                            int i = 1 / 0;
                            System.out.println(i);
                        } catch (Exception e) {
                            e.printStackTrace();
                            // log.error(e.getMessage(), e);
                        }
                    }
                });
            }
            try {
                Thread.sleep(100); // pause 100 ms between rounds of concurrent calls
            } catch (InterruptedException e) {
            }
        }
    }
}
and I ran this class with these VM args:
-Xmx4m -Xms4m -XX:NewSize=1m -XX:MaxNewSize=1m -XX:+PrintGCDetails
Then in jvisualvm's Visual GC tab I found the old gen is 7 M, even though I set the max heap to 4m.
In addition, in the heap dump I did not find StackTraceElement. So how can I emulate this problem successfully?
The StackTraceElement objects are actually created when an exception object is instantiated, and they will be eligible for garbage collection as soon as the exception object is unreachable.
I suspect that the real cause for your (apparent) storage leak is that something in your code is saving lots of exception objects.
Calling printStackTrace() does not leak objects. Your colleague has misdiagnosed the problem. However calling printStackTrace() all over the place is ugly ... and if it happens frequently, that will lead to performance issues.
Your simulation and its results are a red herring. The probable reason the heap is bigger than you asked for is that the JVM has "rounded up" to a larger heap size. (4MB is a minuscule heap size, impractical for most Java programs.)
So how could I emulate this problem successfully?
Emulation is highly unlikely to tell you anything useful. You need to get hold of a heap dump from the production system and analyze that.
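To see what such a leak looks like in a heap dump, here is a contrived sketch (not the asker's code): exceptions are saved in a collection, so their StackTraceElement[] arrays stay reachable and accumulate, much as in the reported image:

import java.util.ArrayList;
import java.util.List;

public class RetainedExceptionsDemo {
    private static final List<Exception> saved = new ArrayList<Exception>();

    public static void main(String[] args) {
        // Run with a small -Xmx; the heap fills with exceptions and their
        // StackTraceElement arrays because 'saved' keeps them reachable.
        while (true) {
            try {
                throw new RuntimeException("boom");
            } catch (RuntimeException e) {
                saved.add(e); // the leak: printStackTrace() alone would not retain anything
            }
        }
    }
}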

Retryable pattern for file processing in Java

I need to process a large file (with columns and lines in the same format). Since I need to account for the possibility that the program crashes during processing, I need the program to be retryable: after it crashes and I start it again, it should continue processing the file from the line where it failed.
Is there any pattern I can follow or library I can use? Thank you!
Update:
About the crashing cases: it is not just about OOM or some internal issue. It could also be caused by a timeout with other components, or by the machine crashing. So try/catch can't handle this.
Another update:
About chunking the file: it is feasible in my case, but not as simple as it sounds. As I said, the file is formatted with several columns, and I could split it into hundreds of files based on one of the columns and then process them one by one. But instead of doing that, I would like to learn more about common solutions for processing big files/data with retry support.
How I would do it (though I am not a pro):
Create a LineProcessor that is called on every line in the file.
class Processor implements LineProcessor<List<String>> {
    private List<String> lines = Lists.newLinkedList();
    private int startFrom = 0;
    private int lineNumber = 0;

    public Processor(int startFrom) {
        this.startFrom = startFrom;
    }

    @Override
    public List<String> getResult() {
        return lines;
    }

    @Override
    public boolean processLine(String arg0) throws IOException {
        lineNumber++;
        if (lineNumber < startFrom) {
            // already processed in a previous attempt - skip
        } else {
            if (new Random().nextInt() % 50000 == 0) {
                throw new IOException("Randomly thrown Exception " + lineNumber);
            }
            // do the hard work here
            lines.add(arg0);
            startFrom++;
        }
        return true;
    }
}
Create a Callable for Reading Files that makes use of my LineProcessor
class Reader implements Callable<List<String>> {
    private int startFrom;

    public Reader(int startFrom) {
        this.startFrom = startFrom;
    }

    @Override
    public List<String> call() throws Exception {
        return Files.readLines(new File("/etc/dictionaries-common/words"),
                Charsets.UTF_8, new Processor(startFrom));
    }
}
Wrap the Callable in a Retryer and call it using an Executor
public static void main(String[] args) throws InterruptedException, ExecutionException {
    BasicConfigurator.configure();
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<List<String>> lines = executor.submit(RetryerBuilder
            .<List<String>> newBuilder()
            .retryIfExceptionOfType(IOException.class)
            .withStopStrategy(StopStrategies.stopAfterAttempt(100)).build()
            .wrap(new Reader(100)));
    logger.debug(lines.get().size());
    executor.shutdown();
    logger.debug("Happily Ever After");
}
You could maintain checkpoint/commit-style logic in your code, so when the program runs again it starts from the last checkpoint.
You can use RandomAccessFile to read the file, and use getFilePointer() as the checkpoint you persist. When you run the program again, start from this checkpoint by calling seek(offset).
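A minimal sketch of that idea (file names and the processLine() hook are placeholders; it also assumes single-byte-encoded lines, since RandomAccessFile.readLine() does not decode UTF-8):

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CheckpointedProcessor {
    public static void process(File data, Path checkpoint) throws IOException {
        long offset = Files.exists(checkpoint)
                ? Long.parseLong(new String(Files.readAllBytes(checkpoint), StandardCharsets.US_ASCII).trim())
                : 0L;
        try (RandomAccessFile raf = new RandomAccessFile(data, "r")) {
            raf.seek(offset); // resume where the last run left off
            String line;
            while ((line = raf.readLine()) != null) {
                processLine(line);
                // Persist the byte position *after* the line just completed.
                Files.write(checkpoint, Long.toString(raf.getFilePointer()).getBytes(StandardCharsets.US_ASCII));
            }
        }
        Files.deleteIfExists(checkpoint); // all done - clear the checkpoint
    }

    private static void processLine(String line) { /* the actual work goes here */ }
}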
Try/catch won't save you from an OOM error. You should process the file in chunks and store the position after every successful chunk in a filesystem/database/whatever place that remains persistent even if your program crashes. Then you can read this position back when you restart the software. You must also clean up this information once the whole file is processed.

Why is Java constantly eating more memory?

So I have this small client-side code:
public class Client {
    private static Socket socket;
    private static ObjectOutputStream out;

    public static void main(String[] args) {
        while (true) {
            try {
                if (socket != null) {
                    out.writeObject("Hello...");
                    Thread.sleep(1500);
                } else {
                    socket = new Socket("myhost", 1234);
                    out = new ObjectOutputStream(socket.getOutputStream());
                    System.out.println("connected to server");
                }
            } catch (final Exception e) {
                // set socket to null for reconnecting
            }
        }
    }
}
What bugs me is that when I run the code with javaw.exe, I see that Java is eating ~10 KB more memory every 2-3 seconds, so memory usage keeps growing and growing...
Is Java really that bad, or is there something else wrong?
I ran this code in a while loop for a while and memory usage increased by 1000 KB.
Doesn't Java garbage-collect the 'tmp' variable after it's used?
try {
    if (socket == null) {
        final Socket tmp = new Socket("localhost", 1234);
        if (tmp != null) {
            socket = tmp;
        }
        Thread.sleep(100);
    }
} catch (final Exception e) {
}
So, I've written a simple test server for your client and I'm now running both, and there seems to be no increase in memory usage.
import java.net.*;
import java.io.*;

/**
 * example class adapted from
 * http://stackoverflow.com/questions/5122569/why-is-java-constantly-eating-more-memory
 */
public class Client {
    private static Socket socket;
    private static ObjectOutputStream out;

    private static void runClient() {
        while (true) {
            try {
                if (socket != null) {
                    out.writeObject("Hello...");
                    Thread.sleep(100);
                    System.out.print(",");
                } else {
                    socket = new Socket("localhost", 1234);
                    out = new ObjectOutputStream(socket.getOutputStream());
                    System.out.println("connected to server");
                }
            } catch (final Exception e) {
                // set socket to null for reconnecting
                e.printStackTrace();
                return;
            }
        }
    }

    private static void runServer() throws IOException {
        ServerSocket ss = new ServerSocket(1234);
        Socket s = ss.accept();
        InputStream in = s.getInputStream();
        byte[] buffer = new byte[500];
        while (in.read(buffer) > 0) {
            System.out.print(".");
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0) {
            runServer();
        } else {
            runClient();
        }
    }
}
What are you doing differently?
So, I've looked in a bit more detail at the memory usage of this program, and found a useful tool for this: the "Java Monitoring and Management Console" hidden in the development menu of my system :-)
Here is a screenshot of the memory usage while running the client program for some time (remember, an object is sent every 100 ms) ...
We can see that the memory usage has a sawtooth curve: it increases linearly, then a garbage collection comes and it falls back down to the base usage. After some initial period the VM does the GC more often (and thus more quickly). For now, no problem.
Here is a variant program where I did not send the same string always, but a different one each time:
private static void runClient() {
    int i = 0;
    while (true) {
        try {
            i++;
            if (socket != null) {
                out.writeObject("Hello " + i + " ...");
                Thread.sleep(100);
                System.out.print(",");
(The rest is like above.) I thought this would need more memory, since the ObjectOutputStream has to remember which objects were already sent, to be able to reuse their identifiers in case they come again.
But no, it looks quite similar:
The little irregularity between 39 and 40 is a manual full GC made by the "Perform GC" button here - it did not change much, though.
I let the last program run a bit longer, and now we see that the ObjectOutputStream still is holding references to our Strings ...
In half an hour our program ate about 2 MB of memory (on a 64-bit VM). In this time, it sent 18000 Strings. So each of the Strings used on average about 100 bytes of memory.
Each of those Strings was between 11 and 17 chars long. The latter ones (about half of them) in fact use 32-char arrays, the former 16-char arrays, because of the allocation strategy of StringBuilder. These take 64 or 32 bytes + array overhead (at least 12 more bytes, likely more). Additionally, the String objects themselves carry some memory overhead (at least 8+8+4+4+4 = 28 bytes for the class pointer and the fields I remember, likely more), so we have on average (at least) 88 bytes per String. In addition, there is likely some overhead in the ObjectOutputStream to maintain these objects in some data structure.
So, not much more is lost than is in fact needed.
Ah, one tip on how to stop the ObjectOutputStream (and the corresponding ObjectInputStream, too) from storing the objects, if you don't plan on sending any of them again: invoke its reset method every few thousand strings or so.
Here is a last screenshot before I kill the program, after a bit more than an hour:
For comparison, I added the reset call mentioned above and let the program run two more hours (and a bit):
It still accumulates memory as before, but now when I click on "Perform GC" it cleans everything and goes back to the earlier state (just a bit over 1 MB). (It would do the same when reaching the end of the heap, but I didn't want to wait that long.)
Well, the garbage collector never runs the moment a variable goes out of scope, or you'd spend most of your time in GC code.
What it does instead (and this is quite a simplification) is wait until memory use reaches a threshold, and only then start releasing memory.
This is what you're seeing: your memory consumption increases so slowly that it takes a long time to reach the next threshold and actually free memory.
I don't think the missing close is your problem, because from what I can tell you intend to keep writing to the stream. Have you tried out.flush()? This flushes the content so that it's no longer held in memory.
It looks like you never close the Socket or flush the ObjectOutputStream. Also note that Java garbage collection basically happens not when you want it to but when the garbage collector sees fit.
IO is cached by the Socket implementation until it is flushed. So either you really read the input / output from the socket (or call #flush() on your streams) or you close the socket.
To me the logic itself is the culprit: there is no condition to break out of the while loop.
Again, no flush.
ObjectOutputStream caches every object you send, in case you send it again. To clear this you need to call the reset() method:
Reset will disregard the state of any objects already written to the stream. The state is reset to be the same as a new ObjectOutputStream. The current point in the stream is marked as reset so the corresponding ObjectInputStream will be reset at the same point. Objects previously written to the stream will not be referred to as already being in the stream. They will be written to the stream again.
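Applied to the client above, a sketch of the fix might look like this (the counter and the interval of 1000 are arbitrary; the fragment assumes it runs where IOException and InterruptedException are handled):

int sent = 0;
while (true) {
    out.writeObject("Hello...");
    out.flush();
    if (++sent % 1000 == 0) {
        out.reset(); // drop the stream's back-references to already-written objects
    }
    Thread.sleep(1500);
}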
BTW: 10 KB is worth about 0.1 cents of memory. One minute of your time at minimum wage is worth about 100 times that. I suggest you consider what the best use of your time is.

How to make the Java system release Soft References?

I'm going to use a SoftReference-based cache (a pretty simple thing by itself). However, I've come across a problem when writing a test for it.
The objective of the test is to check if the cache does request the previously cached object from the server again after the memory cleanup occurs.
Here I hit the problem of how to make the system release soft-referenced objects. Calling System.gc() is not enough, because soft references are not released until memory is low. I'm running this unit test on a PC, so the VM's memory budget could be pretty large.
================== Added later ==============================
Thank you to everyone who took the time to answer!
After considering all the pros and cons, I've decided to go the brute-force way, as advised by nanda and jarnbjo. It turned out, however, that the JVM is not that dumb: it won't even attempt garbage collection if you ask for a single block bigger than the VM's memory budget. So I've modified the code like this:
/* Force releasing SoftReferences */
try {
    final List<long[]> memhog = new LinkedList<long[]>();
    while (true) {
        memhog.add(new long[102400]);
    }
} catch (final OutOfMemoryError e) {
    /* At this point all SoftReferences have been released - GUARANTEED. */
}
/* continue the test here */
This piece of code forces the JVM to flush all SoftReferences, and it's very fast to do.
It works better than the Integer.MAX_VALUE approach, since here the JVM really tries to allocate that much memory.
try {
    Object[] ignored = new Object[(int) Runtime.getRuntime().maxMemory()];
} catch (OutOfMemoryError e) {
    // Ignore
}
I now use this bit of code everywhere I need to unit test code using SoftReferences.
Update: This approach will indeed work only with less than 2G of max memory.
Also, one needs to be very careful with SoftReferences: it's so easy to keep a hard reference by mistake, which negates the effect of the SoftReference, as sketched below.
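A tiny sketch of that mistake (forceOutOfMemory() stands for any of the allocation loops shown on this page):

SoftReference<Object> ref = new SoftReference<Object>(new Object());
Object value = ref.get(); // a hard reference - easy to overlook
forceOutOfMemory();
// ref.get() is still non-null here: 'value' keeps the object alive.
value = null;
forceOutOfMemory();
// Only now can ref.get() return null.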
Here is a simple test that shows it working every time on OS X. I would be interested to know whether the JVM's behavior is the same on Linux and Windows.
for (int i = 0; i < 1000; i++) {
    SoftReference<Object> softReference = new SoftReference<Object>(new Object());
    if (null == softReference.get()) {
        throw new IllegalStateException("Reference should NOT be null");
    }
    try {
        Object[] ignored = new Object[(int) Runtime.getRuntime().maxMemory()];
    } catch (OutOfMemoryError e) {
        // Ignore
    }
    if (null != softReference.get()) {
        throw new IllegalStateException("Reference should be null");
    }
    System.out.println("It worked!");
}
An improvement that will work with more than 2G of max memory. It loops until an OutOfMemoryError occurs:
@Test
public void shouldNotHoldReferencesToObject() {
    final SoftReference<T> reference = new SoftReference<T>( ... );

    // Sanity check
    assertThat(reference.get(), not(equalTo(null)));

    // Force an OoM
    try {
        final ArrayList<Object[]> allocations = new ArrayList<Object[]>();
        int size;
        while ((size = Math.min(Math.abs((int) Runtime.getRuntime().freeMemory()), Integer.MAX_VALUE)) > 0)
            allocations.add(new Object[size]);
    } catch (OutOfMemoryError e) {
        // great!
    }

    // Verify object has been garbage collected
    assertThat(reference.get(), equalTo(null));
}
1. Set the parameter -Xmx to a very small value.
2. Prepare your soft reference.
3. Create as many objects as possible. Ask for the object every time until it is requested from the server again.
This is my small test. Modify as your need.
@Test
public void testSoftReference() throws Exception {
    Set<Object[]> s = new HashSet<Object[]>();
    SoftReference<Object> sr = new SoftReference<Object>(new Object());
    int i = 0;
    while (true) {
        try {
            s.add(new Object[1000]);
        } catch (OutOfMemoryError e) {
            // ignore
        }
        if (sr.get() == null) {
            System.out.println("Soft reference is cleared. Success!");
            break;
        }
        i++;
        System.out.println("Soft reference is not yet cleared. Iteration " + i);
    }
}
You could explicitly set the soft reference to null in your test, and thereby simulate that the soft reference has been released.
This avoids any complicated test setup that is memory- and garbage-collection-dependent.
Instead of a long-running loop (as suggested by nanda), it's probably faster and easier to create a huge primitive array to allocate more memory than is available to the VM, then catch and ignore the OutOfMemoryError:
try {
    long[] foo = new long[Integer.MAX_VALUE];
} catch (OutOfMemoryError e) {
    // ignore
}
This will clear all weak and soft references, unless your VM has more than 16GB heap available.
