This is my first time working with Netty, and I'm hitting a bug in my ByteToMessageDecoder class that I can't figure out.
I'm repeatedly sending a fixed-length packet from my client to the server like so:
public void sendPacket(Packet packet)
{
    ByteBuf buf = Unpooled.wrappedBuffer(packet.getBytes());
    future.channel().writeAndFlush(buf);
}
The client pipeline only contains a working LengthFieldPrepender that prepends the message length as a short.
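For reference, that client pipeline setup looks something like this (a sketch; the 2 is the width in bytes of the prepended length field, matching the short read on the server):

pipeline.addLast(new LengthFieldPrepender(2)); // writes a two-byte length before each message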
My server decoder works properly for a random length of time (usually 30 - 60 seconds) and then starts infinitely looping.
public class TestDecoder extends ByteToMessageDecoder
{
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception
    {
        if (in.readableBytes() < Short.BYTES)
            return;

        int packetLength = in.readShort();
        if (in.readableBytes() < packetLength)
        {
            in.resetReaderIndex();
            return;
        }

        System.out.println(packetLength + " " + in.readableBytes());
        out.add(in.readBytes(packetLength));
    }
}
After some time, my decoder gets stuck in an infinite loop where packetLength holds the correct value, but in.readableBytes() has grown to be greater than packetLength.
When this happens, it seems the input buffer's bytes are no longer being read out to the output list, so the decoder repeats forever and the server stops accepting any new bytes (in.readableBytes() never changes).
What am I doing wrong?
Before int packetLength = in.readShort(); you have to mark the reader index:
in.markReaderIndex();
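With that mark in place, the decode method from the question becomes (same logic, plus the mark):

@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception
{
    if (in.readableBytes() < Short.BYTES)
        return;

    in.markReaderIndex(); // remember where the length field starts
    int packetLength = in.readShort();
    if (in.readableBytes() < packetLength)
    {
        in.resetReaderIndex(); // rewind so the length is re-read on the next call
        return;
    }

    out.add(in.readBytes(packetLength));
}

Without the mark, resetReaderIndex() rewinds to whatever position was last marked, not to the start of the current length field, so the decoder loses its place in the stream.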
Or use a LengthFieldBasedFrameDecoder, a built-in decoder that splits the received ByteBuf dynamically by the value of the length field in the message.
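A minimal sketch of a server pipeline using the built-in decoder (the two-byte length field matches the client's LengthFieldPrepender; the 65535 maxFrameLength is just an assumed limit to tune, and the handler name is illustrative):

pipeline.addLast(new LengthFieldBasedFrameDecoder(65535, 0, 2, 0, 2));
pipeline.addLast(new TestPacketHandler()); // your own handler

The constructor arguments are maxFrameLength, lengthFieldOffset, lengthFieldLength, lengthAdjustment, and initialBytesToStrip; stripping the 2 length bytes hands your handler just the payload.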
Related
I have to transfer large files (up to 10 GB) using UDP. Unfortunately, TCP cannot be used in this use case because no bidirectional communication between sender and receiver is possible.
Sending a file is not the problem. I have written the client using Netty. It reads the file, encodes it (unique ID, position in the stream, and so on) and sends it to the destination at a configurable rate (packets per second). All the packets are received at the destination; I have used iptables and Wireshark to verify that.
The problem occurs at the recipient. Receiving up to 90K packets a second works fine, but receiving and decoding them at this rate is not possible using a single thread.
My first approach was to use thread-safe queues (one producer and multiple consumers), but using multiple consumers did not lead to better results; some packets were still lost. It seems that the overhead (locking/unlocking the queue) slows down the process. So I decided to use the LMAX Disruptor with a single producer (receiving the UDP datagrams) and multiple consumers (decoding the packets). But surprisingly, this does not lead to success either: using two LMAX consumers is hardly any faster, and I wonder why.
This is the main part that receives the UDP packets and calls the Disruptor:
public void receiveUdpStream(DatagramChannel channel) {
    boolean exit = false;
    // the size of the UDP datagram
    int size = shareddata.cr.getDatagramsize();
    // the number of decoders (configurable)
    int nn_decoders = shareddata.cr.getDecoders();
    Udp2flowEventFactory factory = new Udp2flowEventFactory(size);
    // the size of the ring buffer
    int bufferSize = 1 << 10;
    Disruptor<Udp2flowEvent> disruptor = new Disruptor<>(
            factory,
            bufferSize,
            DaemonThreadFactory.INSTANCE,
            ProducerType.SINGLE,
            new YieldingWaitStrategy());
    // my consumers
    Udp2flowDecoder[] decoder = new Udp2flowDecoder[nn_decoders];
    for (int i = 0; i < nn_decoders; i++) {
        decoder[i] = new Udp2flowDecoder(i, shareddata);
    }
    disruptor.handleEventsWith(decoder);
    RingBuffer<Udp2flowEvent> ringBuffer = disruptor.getRingBuffer();
    Udp2flowProducer producer = new Udp2flowProducer(ringBuffer);
    disruptor.start();
    while (!exit) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(size);
            channel.receive(buf);
            receivedDatagrams++; // counting the received packets
            buf.flip();
            producer.onData(buf);
        } catch (Exception e) {
            logger.debug("got exception " + e);
            exit = true;
        }
    }
}
My LMAX event is simple...
public class Udp2flowEvent {

    ByteBuffer buf;

    Udp2flowEvent(int size) {
        this.buf = ByteBuffer.allocateDirect(size);
    }

    public void set(ByteBuffer buf) {
        this.buf = buf;
    }

    public ByteBuffer getEvent() {
        return this.buf;
    }
}
And this is my factory
public class Udp2flowEventFactory implements EventFactory<Udp2flowEvent> {

    private int size;

    Udp2flowEventFactory(int size) {
        super();
        this.size = size;
    }

    public Udp2flowEvent newInstance() {
        return new Udp2flowEvent(size);
    }
}
The producer ...
public class Udp2flowProducer {

    private final RingBuffer<Udp2flowEvent> ringBuffer;

    public Udp2flowProducer(RingBuffer<Udp2flowEvent> ringBuffer)
    {
        this.ringBuffer = ringBuffer;
    }

    public void onData(ByteBuffer buf)
    {
        long sequence = ringBuffer.next(); // Grab the next sequence
        try
        {
            Udp2flowEvent event = ringBuffer.get(sequence);
            event.set(buf);
        }
        finally
        {
            ringBuffer.publish(sequence);
        }
    }
}
The interesting but very simple part is the decoder. It looks like this.
public void onEvent(Udp2flowEvent event, long sequence, boolean endOfBatch) {
    // each consumer decodes its packets
    if (sequence % nn_decoders != decoderid) {
        return;
    }
    ByteBuffer buf = event.getEvent();
    event = null; // is it faster to null the event?
    shareddata.increaseReceiveddatagrams();
    // header fields
    // some code omitted, but it looks something like this
    final int headertype = buf.getInt();
    final int headerlength = buf.getInt();
    final long payloadlength = buf.getLong();
    // decoding the ints and the long works fine,
    // but decoding the remaining part does not!
    byte[] payload = new byte[buf.remaining()];
    buf.get(payload);
    // some code omitted. The payload is used later on...
}
And here are some interesting facts:
All decoders work well; I can see the expected number of decoders running.
All packets are received, but the decoding takes too long. More precisely: decoding the first two ints and the long value works fine, but decoding the payload takes too long. This leads to 'backpressure', and some packets are lost.
Fun fact: the code works fine on my MacBook Air but does not work on my server (MacBook: Core i7; server: ESXi with 8 virtual cores on a Xeon @ 2.6 GHz and no load at all).
Now my questions, and I hope that somebody has an idea:
Why does it hardly make a difference to use several consumers? The difference is only about 5%.
In general: what is the best way to receive 60K (or more) UDP packets per second and decode them? I tried Netty as the receiver, but UDP did not scale very well.
Why is decoding the payload so slow?
Are there any errors that I have overlooked?
Should I use another producer/consumer library? LMAX has very low latency, but what about throughput?
Ring buffers don't seem like the right technology for this problem: once a ring buffer has filled all its capacity it will block, and it is also an inherently sequential architecture. You need to know in advance the highest number of packets to expect and size for that. Also, UDP is lossy unless you implement a message-assurance protocol.
Not sure why you say TCP is not bidirectional; it is, and it takes care of lost packets.
To cope with data flooding, you may need to distribute the incoming packets to separate servers if a single one is insufficient. A queue should work to absorb a flood of data. You may need a massive number of decoders waiting if you want to process this volume of data in near real time.
Suggest you use TCP.
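If you do switch, a minimal sketch of a length-prefixed receive loop over TCP (names and port are illustrative; TCP itself handles loss and ordering, so readFully can simply wait for the whole frame):

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public void receiveTcpStream(int port) throws IOException {
    try (ServerSocket server = new ServerSocket(port);
         Socket sender = server.accept();
         DataInputStream in = new DataInputStream(
                 new BufferedInputStream(sender.getInputStream()))) {
        while (true) {
            int len = in.readInt();        // length prefix written by the sender
            byte[] packet = new byte[len];
            in.readFully(packet);          // blocks until the whole frame has arrived
            // hand 'packet' off to a pool of decoder threads here
        }
    } // readInt() throws EOFException when the sender closes the connection
}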
For the life of me, I haven't been able to find a question that matches what I'm trying to do, so I'll explain what my use-case is here. If you know of a topic that already covers the answer to this, please feel free to direct me to that one. :)
I have a piece of code that uploads a file to Amazon S3 periodically (every 20 seconds). The file is a log file being written by another process, so this function is effectively a means of tailing the log so that someone can read its contents in semi-real-time without having to have direct access to the machine that the log resides on.
Up until recently, I've simply been using the S3 PutObject method (using a File as input) to do this upload. But in AWS SDK 1.9, this no longer works because the S3 client rejects the request if the content size actually uploaded is greater than the content-length that was promised at the start of the upload. This method reads the size of the file before it starts streaming the data, so given the nature of this application, the file is very likely to have increased in size between that point and the end of the stream. This means that I need to now ensure I only send N bytes of data regardless of how big the file is.
I don't have any need to interpret the bytes in the file in any way, so I'm not concerned about encoding. I can transfer it byte-for-byte. Basically, what I want is a simple method where I can read the file up to the Nth byte, then have it terminate the read even if there's more data in the file past that point. (In other words, insert EOF into the stream at a specific point.)
For example, if my file is 10000 bytes long when I start the upload, but grows to 12000 bytes during the upload, I want to stop uploading at 10000 bytes regardless of that size change. (On a subsequent upload, I would then upload the 12000 bytes or more.)
I haven't found a pre-made way to do this - the best I've found so far appears to be IOUtils.copyLarge(InputStream, OutputStream, offset, length), which can be told to copy a maximum of "length" bytes to the provided OutputStream. However, copyLarge is a blocking method, as is PutObject (which presumably calls a form of read() on its InputStream), so it seems that I couldn't get that to work at all.
I haven't found any methods or pre-built streams that can do this, so it's making me think I'd need to write my own implementation that directly monitors how many bytes have been read. That would probably then work like a BufferedInputStream where the number of bytes read per batch is the lesser of the buffer size or the remaining bytes to be read (e.g., with a buffer size of 3000 bytes, I'd do three batches of 3000 bytes each, followed by a batch of 1000 bytes plus EOF).
Does anyone know a better way to do this? Thanks.
EDIT Just to clarify, I'm already aware of a couple of alternatives, neither of which is ideal:
(1) I could lock the file while uploading it. Doing this would cause loss of data or operational problems in the process that's writing the file.
(2) I could create a local copy of the file before uploading it. This could be very inefficient and take up a lot of unnecessary disk space (this file can grow into the several-gigabyte range, and the machine it's running on may be that short of disk space).
EDIT 2: My final solution, based on a suggestion from a coworker, looks like this:
private void uploadLogFile(final File logFile) {
    if (logFile.exists()) {
        long byteLength = logFile.length();
        try (
            FileInputStream fileStream = new FileInputStream(logFile);
            InputStream limitStream = ByteStreams.limit(fileStream, byteLength);
        ) {
            ObjectMetadata md = new ObjectMetadata();
            md.setContentLength(byteLength);
            // Set other metadata as appropriate.
            PutObjectRequest req = new PutObjectRequest(bucket, key, limitStream, md);
            s3Client.putObject(req);
        } // plus exception handling
    }
}
LimitInputStream was what my coworker suggested, apparently not aware that it had been deprecated. ByteStreams.limit is the current Guava replacement, and it does what I want. Thanks, everyone.
It is relatively straightforward to wrap an InputStream so as to cap the number of bytes it will deliver before signaling end-of-data. FilterInputStream is targeted at this general kind of job, but since you have to override pretty much every method for this particular job anyway, it just gets in the way.
Here's a rough cut at a solution:
import java.io.IOException;
import java.io.InputStream;

/**
 * An {@code InputStream} wrapper that provides up to a maximum number of
 * bytes from the underlying stream. Does not support mark/reset, even
 * when the wrapped stream does, and does not perform any buffering.
 */
public class BoundedInputStream extends InputStream {

    /** This stream's underlying {@code InputStream} */
    private final InputStream data;

    /** The maximum number of bytes still available from this stream */
    private long bytesRemaining;

    /**
     * Initializes a new {@code BoundedInputStream} with the specified
     * underlying stream and byte limit
     *
     * @param data the {@code InputStream} serving as the source of this
     *        one's data
     * @param maxBytes the maximum number of bytes this stream will deliver
     *        before signaling end-of-data
     */
    public BoundedInputStream(InputStream data, long maxBytes) {
        this.data = data;
        bytesRemaining = Math.max(maxBytes, 0);
    }

    @Override
    public int available() throws IOException {
        return (int) Math.min(data.available(), bytesRemaining);
    }

    @Override
    public void close() throws IOException {
        data.close();
    }

    @Override
    public synchronized void mark(int limit) {
        // does nothing
    }

    @Override
    public boolean markSupported() {
        return false;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        if (bytesRemaining > 0) {
            int nRead = data.read(
                    buf, off, (int) Math.min(len, bytesRemaining));
            if (nRead > 0) { // guard against -1 when the underlying stream hits EOF
                bytesRemaining -= nRead;
            }
            return nRead;
        } else {
            return -1;
        }
    }

    @Override
    public int read(byte[] buf) throws IOException {
        return this.read(buf, 0, buf.length);
    }

    @Override
    public synchronized void reset() throws IOException {
        throw new IOException("reset() not supported");
    }

    @Override
    public long skip(long n) throws IOException {
        long skipped = data.skip(Math.min(n, bytesRemaining));
        bytesRemaining -= skipped;
        return skipped;
    }

    @Override
    public int read() throws IOException {
        if (bytesRemaining > 0) {
            int c = data.read();
            if (c >= 0) {
                bytesRemaining -= 1;
            }
            return c;
        } else {
            return -1;
        }
    }
}
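To plug it into the upload from the question, the usage is a one-line swap for ByteStreams.limit (a sketch reusing the bucket, key, and s3Client names from the edit above):

long byteLength = logFile.length(); // snapshot the size before the file grows further
ObjectMetadata md = new ObjectMetadata();
md.setContentLength(byteLength);
try (InputStream bounded = new BoundedInputStream(new FileInputStream(logFile), byteLength)) {
    s3Client.putObject(new PutObjectRequest(bucket, key, bounded, md));
}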
I'm trying to send a byte array containing 16 items over sockets using DataOutputStream on the client and DataInputStream on the server.
These are the methods I am using for sending/receiving.
public void sendBytes(byte[] myByteArray) throws IOException {
    sendBytes(myByteArray, 0, myByteArray.length);
}

public void sendBytes(byte[] myByteArray, int start, int len) throws IOException {
    if (len < 0)
        throw new IllegalArgumentException("Negative length not allowed");
    if (start < 0 || start >= myByteArray.length)
        throw new IndexOutOfBoundsException("Out of bounds: " + start);
    dOutput.writeInt(len);
    if (len > 0) {
        dOutput.write(myByteArray, start, len);
        dOutput.flush();
    }
}

public byte[] readBytes() throws IOException {
    int len = dInput.readInt();
    System.out.println("Byte array length: " + len); // prints '16'
    byte[] data = new byte[len];
    if (len > 0) {
        dInput.readFully(data);
    }
    return data;
}
It all works fine: I can print the byte array length, the byte array (ciphertext), and then decrypt the byte array and print out the original plaintext I sent. But immediately after that prints to the console, the program crashes with an OutOfMemoryError: Java heap space.
I have read this is usually caused by not flushing the DataOutputStream, but I am calling flush() inside the sendBytes method, so it should clear after every array is sent.
The stack trace tells me the error is occurring inside readBytes on the line byte[] data = new byte[len]; and also where I call readBytes() in the main method.
Any help will be greatly appreciated!
Edit
I am actually getting some unexpected results.
17:50:14 Server waiting for Clients on port 1500.
Thread trying to create Object Input/Output Streams
17:50:16 Client[0.7757499147242042] just connected.
17:50:16 Server waiting for Clients on port 1500.
Byte array length: 16
Server recieved ciphertext: 27 10 -49 -83 127 127 84 -81 48 -85 -57 -38 -13 -126 -88 6
Server decrypted ciphertext to: asd
17:50:19 Client[0.7757499147242042]
Byte array length: 1946157921
I am calling readBytes() in a while loop, so the server will be listening for anything transmitted over the socket. I guess it's trying to run a second time even though nothing else has been sent, and the len variable is somehow being set to 1946157921. What logic could be behind this?
You must be sending something else over the socket, or not reading it the same way you wrote it, and so getting out of sync. The effect is that you're reading a length word that isn't a real length; it's too big; and you run out of memory when you try to allocate it. The fault isn't in this code. Except, of course, that if len == 0 you shouldn't allocate the byte array when reading.
I have read this is usually because of not flushing the DataOutputStream
It isn't.
len variable is somehow being set to 1946157921.
Exactly as predicted. QED
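One cheap defense while tracking down the desynchronization is a sanity check on the length word before allocating, sketched here with an assumed protocol cap (the dInput field is the one from the question):

private static final int MAX_MESSAGE_SIZE = 1 << 20; // assumed upper bound for your protocol

public byte[] readBytes() throws IOException {
    int len = dInput.readInt();
    if (len < 0 || len > MAX_MESSAGE_SIZE) {
        throw new IOException("Stream out of sync, bogus length: " + len);
    }
    byte[] data = new byte[len];
    dInput.readFully(data); // readFully is a no-op for a zero-length array
    return data;
}

This turns a silent OutOfMemoryError into an immediate, diagnosable IOException.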
You are running out of available heap. A quick solution would be to increase (or specify, if missing) the -Xmx parameter in your JVM startup parameters, to a level where the application is able to complete the task at hand.
Run your application with -Xms1500m in the console; in NetBeans you can find it under project properties -> Run -> VM options.
I faced this out-of-memory problem today, and after tweaking -Xms for a while I was able to fix it. Check if it works for you; if there is something really bigger than this, then you will have to look at how you can improve your code.
Check discussion here
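For reference, heap settings are passed on the command line like this (the 1500 MB figure is just the value from above; the jar name is illustrative):

java -Xms1500m -Xmx1500m -jar yourapp.jar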
Currently, I am relying on the ObjectInputStream.available() method to tell me how many bytes are left in a stream. Reason for this -- I am writing some unit/integration tests on certain functions that deal with streams and I am just trying to ensure that the available() method returns 0 after I am done.
Unfortunately, upon testing for failure (i.e., I have sent about 8 bytes down the stream), my assertion that available() == 0 is coming up true when it should be false. It should show > 0, namely the 8 bytes!
I know that the available() method is classically unreliable, but I figured it would show something at least > 0!
Is there a more reliable way of checking whether a stream is empty or not (that is my main goal here, after all)? Perhaps in the Apache IO domain or some other library out there?
Does anyone know why the available() method is so profoundly unreliable; what is the point of it? Or, is there a specific, proper way of using it?
Update:
So, as many of you can read from the comments, the main issue I am facing is that on one end of a stream, I am sending a certain number of bytes but on the other end, not all the bytes are arriving!
Specifically, I am sending 205498 bytes on one end and consistently getting only 204988 on the other. I am controlling both sides of this operation between threads over a socket, but that shouldn't matter.
Here is the code I have written to collect all the bytes.
public static int copyStream(InputStream readFrom, OutputStream writeTo, int bytesToRead)
        throws IOException {
    int bytesReadTotal = 0, bytesRead = 0, countTries = 0, available = 0, bufferSize = 1024 * 4;
    byte[] buffer = new byte[bufferSize];
    while (bytesReadTotal < bytesToRead) {
        if (bytesToRead - bytesReadTotal < bufferSize)
            buffer = new byte[bytesToRead - bytesReadTotal];
        if (0 < (available = readFrom.available())) {
            bytesReadTotal += (bytesRead = readFrom.read(buffer));
            writeTo.write(buffer, 0, bytesRead);
            countTries = 0;
        } else if (countTries < 1000)
            try {
                countTries++;
                Thread.sleep(1L);
            } catch (InterruptedException ignore) {}
        else
            break;
    }
    return bytesReadTotal;
}
I put the countTries variable in there just to see what happens. Even without countTries, it will block forever before it reaches bytesToRead.
What would cause the stream to suddenly block indefinitely like that? I know the other end fully sends the bytes over (it actually uses the same method, and I see that it completes the function with bytesReadTotal matching the full bytesToRead in the end), but the receiver doesn't. In fact, when I look at the arrays, they match up perfectly up till the end as well.
UPDATE2
I noticed that when I added a writeTo.flush() at the end of my copyStream method, it seems to work again. Hmm... why are flushes so vital in this situation? I.e., why would not flushing cause a stream to block permanently?
The available() method only returns how many bytes can be read without blocking (which may be 0). In order to see if there are any bytes left in the stream, you have to read() or read(byte[]) which will return the number of bytes read. If the return value is -1 then you have reached the end of file.
This little code snippet will loop through an InputStream until it reaches the end (read() returns -1). I don't think read() can ever return 0 here, because it should block until it can either read at least 1 byte or discover there is nothing left to read (and therefore return -1):
int currentBytesRead = 0;
int totalBytesRead = 0;
byte[] buf = new byte[1024];
while ((currentBytesRead = in.read(buf)) > 0) {
    totalBytesRead += currentBytesRead;
}
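Applying the same idea to the copyStream method from the question gives something like this sketch: it relies on read() blocking instead of polling available(), reads until exactly bytesToRead bytes have arrived or EOF is hit, and flushes at the end (the fix you discovered in UPDATE2):

public static int copyStream(InputStream readFrom, OutputStream writeTo, int bytesToRead)
        throws IOException {
    byte[] buffer = new byte[1024 * 4];
    int bytesReadTotal = 0;
    while (bytesReadTotal < bytesToRead) {
        int chunk = Math.min(buffer.length, bytesToRead - bytesReadTotal);
        int bytesRead = readFrom.read(buffer, 0, chunk); // blocks until data or EOF
        if (bytesRead == -1)
            break; // the other side closed before sending everything
        writeTo.write(buffer, 0, bytesRead);
        bytesReadTotal += bytesRead;
    }
    writeTo.flush(); // push out any bytes sitting in an intermediate buffer
    return bytesReadTotal;
}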
I need to encrypt and send data over TCP (from a few hundred bytes to a few hundred megabytes per message) in chunks from Java to a C++ program, and I need to send the size of the data ahead of time so the recipient knows when to stop reading the current message and process it, then wait for the next one. The connection stays open, so there's no other way to indicate the end of a message; and since the data can be binary, I can't use a flag to mark the end, because the encrypted bytes might randomly happen to be identical to any flag I choose at some point.
My issue is calculating the encrypted message size before encrypting, which will in general be different from the input length due to padding and so on.
Say I have initialized as follows:
AlgorithmParameterSpec paramSpec = new IvParameterSpec(initv);
encipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
mac = Mac.getInstance("HmacSHA512");
encipher.init(Cipher.ENCRYPT_MODE, key, paramSpec);
mac.init(key);
buf = new byte[encipher.getOutputSize(blockSize)];
Then I send the data as follows (I also have an analogous function that takes a stream as input instead of a byte[]):
public void writeBytes(DataOutputStream out, byte[] input) {
    try {
        //mac.reset(); // Needed?
        int left = input.length;
        int offset = 0;
        while (left > 0)
        {
            int chunk = Math.min(left, blockSize);
            int ctLength = encipher.update(input, offset, chunk, buf, 0);
            mac.update(input, offset, chunk);
            out.write(buf, 0, ctLength);
            left -= chunk;
            offset += chunk;
        }
        out.write(encipher.doFinal(mac.doFinal()));
        out.flush();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
But how to precalculate the output size that will be sent to the receiving computer?
Basically, I want to out.writeInt(messageSize) before the loop. But how do I calculate messageSize? The documentation for Cipher's getOutputSize() says that "This call takes into account any unprocessed (buffered) data from a previous update call, and padding." So this seems to imply that the value might change for the same argument across multiple calls to update() or doFinal()... Can I assume that if blockSize is a multiple of the AES CBC block size, no padding is added per chunk and each block has a constant output size? That is, simply check that blockSize % encipher.getOutputSize(1) == 0 and then, in the write function,

int messageSize = (input.length / blockSize) * encipher.getOutputSize(blockSize) +
        encipher.getOutputSize(input.length % blockSize + mac.getMacLength());

??
If not, what alternatives do I have?
When using PKCS5 padding, the size of the message after padding will be:
padded_size = original_size + BLOCKSIZE - (original_size % BLOCKSIZE);
The above gives the complete size of the entire message (up to the doFinal() call), given the complete size of the input message. It occurs to me that you may actually want to know just the length of the final portion; all you need to do is store the output byte array of the doFinal() call and use its .length field.
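Applied to the scheme in the question, where the message and then the HMAC are run through one continuous CBC/PKCS5 operation, the total ciphertext size can be precomputed with the padded-size formula above (a sketch; input, mac, and out are the names from the question's writeBytes):

// Total plaintext run through the cipher: the input plus the MAC bytes
// that are encrypted in the final block.
long plainTotal = input.length + mac.getMacLength();
int aesBlockSize = 16; // AES block size in bytes
// PKCS5 always adds between 1 and 16 bytes of padding:
long messageSize = plainTotal + aesBlockSize - (plainTotal % aesBlockSize);
out.writeInt((int) messageSize); // length prefix, written before the loop

This works regardless of whether the last chunk is a full blockSize, because any bytes the cipher buffers during update() are emitted by doFinal() along with the padded MAC.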