EventLoop#submit() vs #execute() vs Channel#writeAndFlush() - java

What's the difference between the 3 methods when writing bytes to a channel?
In my case, the thread writing these bytes is not the thread that belongs to the channel's EventLoop, and I understand that IO events always happen on the channel's assigned EventLoop thread.
I am trying to minimize the latency of getting these bytes flushed as soon as possible.
To better understand what I can do to optimize this, I need to know the difference between these 3 ways to write data to a channel, and possibly any other way I may have missed?
byte[] data = ...
Channel channel = ...
// 1
channel.eventLoop().submit(() -> channel.writeAndFlush(data));
// 2
channel.eventLoop().execute(() -> channel.writeAndFlush(data));
// 3
channel.writeAndFlush(data);

So for what you are doing here there isn't really much difference, except in how the return value of writeAndFlush() is propagated. writeAndFlush() is safe to call from any thread: if the caller is not the event loop thread, Netty schedules the write on the channel's event loop internally and immediately hands you the resulting ChannelFuture. Wrapping it in execute() (option 2) adds an extra task and discards that ChannelFuture, while submit() (option 1) returns a Future whose result is itself the ChannelFuture, so reaching the write's outcome takes an extra hop. For minimal latency, option 3 is the natural choice.
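Since Netty's EventLoop extends java.util.concurrent.ScheduledExecutorService, the submit()/execute() distinction is the plain executor one. A minimal sketch with a single-threaded executor standing in for the event loop (no Netty involved, names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecute {
    // submit() wraps the task in a Future, so the result (or any exception)
    // can be retrieved by the caller; execute() is fire-and-forget.
    static String runViaSubmit(ExecutorService loop) throws Exception {
        Future<String> f = loop.submit(() -> "done");
        return f.get(); // blocks until the loop thread has run the task
    }

    public static void main(String[] args) throws Exception {
        ExecutorService loop = Executors.newSingleThreadExecutor();
        loop.execute(() -> System.out.println("ran via execute")); // no handle returned
        System.out.println(runViaSubmit(loop)); // "done"
        loop.shutdown();
    }
}
```

The same trade-off applies on an EventLoop: execute() avoids allocating a Future you would never use, and submit() only pays off if you actually consume the returned Future.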

Related

How to properly implement a blocking, thread-safe write method for Java sockets?

I wrote a WebSocket server in Java.
This is the method that the server uses to send WebSocket packets to its clients:
private void sendFrame(boolean fin, boolean rsv1, boolean rsv2, boolean rsv3, WebSocketOpcode opcode, byte[] payloadData) throws IOException {
    if (connection.isClosed() || webSocketConnectionClosing != null) return;
    byte[] header = new byte[2];
    if (fin) header[0] |= 1 << 7;
    if (rsv1) header[0] |= 1 << 6;
    if (rsv2) header[0] |= 1 << 5;
    if (rsv3) header[0] |= 1 << 4;
    header[0] |= opcode.get() & 0b1111;
    header[1] |= payloadData.length < 126 ? payloadData.length : (payloadData.length <= 65535 ? 126 : 127);
    out.write(header);
    if (payloadData.length > 125) {
        if (payloadData.length <= 65535) {
            out.writeShort(payloadData.length);
        } else {
            out.writeLong(payloadData.length);
        }
    }
    out.write(payloadData);
    out.flush();
}
And this is how I declare the output stream after a client connects:
out = new DataOutputStream(new BufferedOutputStream(connection.getOutputStream()));
And I have some questions regarding this:
Is the above code thread-safe? What I mean by that is, can multiple threads call sendFrame() at the same time without the risk of packet data interleaving? It looks like this code is wrong, but I haven't encountered any interleaving yet.
If it isn't thread-safe, then how would I make it thread-safe in this form without the use of queues? (I want the sendFrame() method to be blocking until the data is actually sent)
If I didn't wrap the OutputStream in a BufferedOutputStream, but only in a DataOutputStream, would that make the .write() method atomic? Would it be thread-safe to pack the entire packet data into a single byte array and then call .write() once with that array?
Is the above code thread-safe? What I mean by that is, can multiple threads call sendFrame() at the same time without the risk of packet data interleaving?
It is not thread-safe.
It looks like this code is wrong, but I haven't encountered any interleaving yet.
The time window in which the interleaving could occur is very small. Probably less than a microsecond. That means the probability of it occurring is small. But not zero.
If it isn't thread-safe, then how would I make it thread-safe in this form without the use of queues? (I want the sendFrame() method to be blocking until the data is actually sent)
It depends on how the sendFrame method fits in with the rest of your code.
The approach I would use is to ensure that all calls to sendFrame for a given output stream are made on the same target object, and then use synchronized to lock on that target object, or on a private lock object belonging to it.
An alternative would be to use synchronized and lock on out. However, there is a risk that something else is already locking on it, and sendFrame calls would then be blocked unnecessarily.
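A minimal sketch of the private-lock approach, assuming a wrapper object that owns the stream (the class and method names are illustrative, not from the original code):

```java
import java.io.DataOutputStream;
import java.io.IOException;

public class FrameWriter {
    // Private lock object: no outside code can accidentally contend on it,
    // unlike locking on `out` itself.
    private final Object writeLock = new Object();
    private final DataOutputStream out;

    public FrameWriter(DataOutputStream out) {
        this.out = out;
    }

    // Header, payload and flush all happen under one lock, so frames from
    // concurrent callers can never interleave; the call blocks until the
    // whole frame has been handed to the underlying stream.
    public void sendFrame(byte[] header, byte[] payload) throws IOException {
        synchronized (writeLock) {
            out.write(header);
            out.write(payload);
            out.flush();
        }
    }
}
```

Because flush() is inside the synchronized block, the method also satisfies the "blocking until the data is actually sent" requirement as far as the local stream is concerned.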
If I didn't wrap the OutputStream in a BufferedOutputStream, but only in a DataOutputStream, would that make the .write() method atomic?
(That's beside the point. You have 3 write calls to contend with. However ....)
Would it be thread-safe to pack the entire packet data into a single byte array and then call .write() once with that array?
None of those classes are documented1 as thread-safe, or as guaranteeing that write operations are atomic. However, in OpenJDK Java 11 (at least), the relevant write methods are implemented as synchronized in BufferedOutputStream and DataOutputStream.
1 - If the javadocs don't specify thread-safety, etc characteristics, then those characteristics could vary depending on the Java version, etc.

Java NIO ByteBuffer, write after flip

I'm new to Java ByteBuffers and was wondering what the correct way is to write to a ByteBuffer after it has been flipped.
In my use case, I am writing an outputBuffer to a socket:
outBuffer.flip();
//Non-blocking SocketChannel
int bytesWritten = getSocket().write(outBuffer);
After this, the output buffer has to be written to again. Also, not all of the bytes in the outBuffer may have been written to the socket.
Since it is currently flipped, how can I make it writable again, without overwriting any data that is still in the buffer and wasn't written to the socket?
If I am right, outBuffer.position() == bytesWritten, and the limit should be at how much data there was to write.
So would using the following in order to reuse the output buffer be right? :
int limit = outBuffer.limit();
outBuffer.limit(outBuffer.capacity());
outBuffer.position(limit);
Again, from the API spec:
The following loop copies bytes from one channel to another via the buffer buf:
while (in.read(buf) >= 0 || buf.position() != 0) {
    buf.flip();
    out.write(buf);
    buf.compact(); // In case of partial write
}
since it is currently flipped
It will stay flipped. The write doesn't change that.
how can I make it writable again, without overwriting any data that is still in the buffer and wasn't written to the socket?
You don't have to do anything, but if you want to read before you write again you should do flip/write/compact. If you just want to repeat the write just call write() again, with the buffer still in its current state.
But I prefer to always keep these buffers ready for reading, so there is no possibility of a slip-up, and to flip/write/compact (or flip/get/compact) when those operations are necessary, atomically as it were.
Note that you should not use clear(), unless you are certain that the write was complete and the buffer is now empty. In that case compact and clear are equivalent. But it is simpler to just always compact.
If you're copying in blocking mode, use the loop quoted by @zlakad.
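What compact() does after a partial write can be seen without any sockets; in this self-contained sketch, draining only part of the buffer stands in for a short channel write:

```java
import java.nio.ByteBuffer;

public class CompactDemo {
    // Fill a buffer, flip it, drain only `drained` bytes (simulating a
    // partial channel write), then compact. Returns the surviving bytes,
    // which should be exactly the undrained remainder.
    static byte[] partialDrain(byte[] data, int drained) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put(data);              // fill mode
        buf.flip();                 // drain mode: position=0, limit=data.length
        buf.get(new byte[drained]); // partial "write": position advances
        buf.compact();              // leftovers moved to index 0, back in fill mode
        buf.flip();                 // drain again just to inspect the contents
        byte[] left = new byte[buf.remaining()];
        buf.get(left);
        return left;
    }
}
```

After compact() the buffer is ready for more put()/read() calls without clobbering the unwritten tail, which is exactly the flip/write/compact discipline recommended above.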

Strange behaviour of ArrayBlockingQueue with array elements

I am seeing some strange behaviour with an ArrayBlockingQueue, which I use to communicate between certain threads in a Java application.
I am using 1 static ArrayBlockingQueue as initialised like this:
protected static BlockingQueue<long[]> commandQueue;
Followed by the constructor which has this as one of its lines:
commandQueue = new ArrayBlockingQueue<long[]>(amountOfThreads*4);
Where amountOfThreads is given as a constructor argument.
I then have a producer that creates a long[2] array, gives it some values and offers it to the queue; directly afterwards I change one of the array's values and offer it once again:
long[] temp = new long[2];
temp[0] = currentThread().getId();
temp[1] = gyrAddress;//Address of an i2c sensor
CommunicationThread.commandQueue.offer(temp);//CommunicationThread is where the commandqueue is located
temp[1] = axlAddress;//Change the address to a different sensor
CommunicationThread.commandQueue.offer(temp);
The consumer will then take this data and open up an i2c connection to a specific sensor, get some data from said sensor and communicate the data back using another queue.
For now however I have set the consumer to just consume the head and print the data.
long[] command = commandQueue.take(); // This will hold the program until there is at least 1 command in the queue
if (command.length != 2) {
    throw new ArrayIndexOutOfBoundsException("The command given is of incorrect format");
} else {
    System.out.println("The thread with thread id " + command[0] + " has given the command to get data from address " + Long.toHexString(command[1]));
}
Now for testing I have a producer thread with these addresses (byte) 0x34, (byte)0x44
If things are going correctly my output should be:
The thread with thread id 14 has given the command to get data from address 44
The thread with thread id 14 has given the command to get data from address 34
However I get:
The thread with thread id 14 has given the command to get data from address 34
The thread with thread id 14 has given the command to get data from address 34
Which would mean that it is sending the temp array after it has changed it.
Things that I did to try and fix it:
I tried a sleep: if I add a 150 ms sleep between the two offers, the response is correct.
However this method will quite obviously affect performance...
Since the offer method returns a boolean, I tried the following piece of code:
boolean tempBool = false;
while (!tempBool) {
    tempBool = CommunicationThread.commandQueue.offer(temp);
    System.out.println(tempBool);
}
Which prints out true. This did not have an effect.
I tried printing temp[1] after this while loop, and at that moment it is the correct value. (It prints 44; however, the consumer receives 34.)
What is most likely the case is a synchronisation issue; however, I thought that the point of a BlockingQueue-based object was to solve this.
Any help or suggestion on the workings of this BlockingQueue would be greatly appreciated. Let me end on a note that this is my first time working with queues between threads in Java, and that the final program will be running on a Raspberry Pi, using the pi4j library to communicate with the sensors.
Since you asked about how BlockingQueue works exactly, let's start with that:
A blocking queue is a queue that blocks when you try to dequeue from it while the queue is empty, or when you try to enqueue items to it while the queue is already full. A thread trying to dequeue from an empty queue is blocked until some other thread inserts an item into the queue.
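That behaviour is easy to see in a few lines; note that offer() and poll() are the non-blocking counterparts of put() and take() (and offer() is what your producer uses):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        System.out.println(q.offer(1)); // true: there was room
        System.out.println(q.offer(2)); // false: queue full - offer() does NOT block
        System.out.println(q.take());   // 1 - take() blocks only while the queue is empty
        System.out.println(q.poll());   // null: empty - poll() does not block either
    }
}
```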
So a blocking queue prevents a thread from taking from a queue that is empty, or inserting into one that is full, by making it wait until the operation is possible.
As Andy Turner and JB Nizet already explained, the queue does not copy your array; it stores a reference to it. When the consumer later takes that element, it follows the same reference to the same array in memory. So when you change temp[1] after offering the array, you are mutating the very element that is still sitting in the queue, and the consumer sees the updated value. The way to circumvent this is to create a new array for every entry you add to the queue; that way you can be sure you never overwrite data before it has been processed by the other thread. A simple way to do this is:
long[] tempGyr = new long[2];
tempGyr[0] = currentThread().getId();
tempGyr[1] = gyrAddress;
CommunicationThread.commandQueue.offer(tempGyr);//CommunicationThread is where the commandqueue is located
long[] tempAxl = new long[2];
tempAxl[0] = currentThread().getId();
tempAxl[1] = axlAddress;
CommunicationThread.commandQueue.offer(tempAxl);
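To see the aliasing for yourself, this minimal reproduction stores one array, mutates it after offer(), and observes the mutated value on take() (the addresses are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AliasingDemo {
    static long observedAddress() throws InterruptedException {
        BlockingQueue<long[]> q = new ArrayBlockingQueue<>(4);
        long[] temp = new long[]{0, 0x44};
        q.offer(temp);  // the queue stores a reference to temp, not a copy
        temp[1] = 0x34; // mutates the element that is already in the queue
        return q.take()[1];
    }

    public static void main(String[] args) throws InterruptedException {
        // Prints 34, not 44 - the consumer sees the post-offer mutation.
        System.out.println(Long.toHexString(observedAddress()));
    }
}
```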
Hope this explains the subject; if not, feel free to ask additional questions :)

Java HttpURLConnection InputStream.close() hangs (or works too long?)

First, some background. There is a worker which expands/resolves a bunch of short URLs:
http://t.co/example -> http://example.com
So, we just follow redirects. That's it. We don't read any data from the connection. Right after we get a 200, we return the final URL and close the InputStream.
Now, the problem itself. On a production server one of the resolver threads hangs inside the InputStream.close() call:
"ProcessShortUrlTask" prio=10 tid=0x00007f8810119000 nid=0x402b runnable [0x00007f882b044000]
java.lang.Thread.State: RUNNABLE
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.skip(BufferedInputStream.java:352)
- locked <0x0000000561293aa0> (a java.io.BufferedInputStream)
at sun.net.www.MeteredStream.skip(MeteredStream.java:134)
- locked <0x0000000561293a70> (a sun.net.www.http.KeepAliveStream)
at sun.net.www.http.KeepAliveStream.close(KeepAliveStream.java:76)
at java.io.FilterInputStream.close(FilterInputStream.java:155)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.close(HttpURLConnection.java:2735)
at ru.twitter.times.http.URLProcessor.resolve(URLProcessor.java:131)
at ru.twitter.times.http.URLProcessor.resolve(URLProcessor.java:55)
at ...
After a brief investigation, I understood that skip() is called to clean up the stream before returning the connection to the connection pool (if keep-alive is on?). Still, I don't understand how to avoid this situation. Moreover, I doubt whether there is some bad design in our code or a problem in the JDK.
So, the questions are:
Is it possible to avoid hanging on close()? Guarantee some reasonable timeout, for example.
Is it possible to avoid reading data from the connection at all? Remember, I just want the final URL. Actually, I think I don't want skip() to be called at all...
Update:
KeepAliveStream, line 79, close() method:
// Skip past the data that's left in the Inputstream because
// some sort of error may have occurred.
// Do this ONLY if the skip won't block. The stream may have
// been closed at the beginning of a big file and we don't want
// to hang around for nothing. So if we can't skip without blocking
// we just close the socket and, therefore, terminate the keepAlive
// NOTE: Don't close super class
try {
    if (expected > count) {
        long nskip = (long) (expected - count);
        if (nskip <= available()) {
            long n = 0;
            while (n < nskip) {
                nskip = nskip - n;
                n = skip(nskip);
            } ...
More and more it seems to me that there is a bug in JDK itself. Unfortunately, it's very hard to reproduce this ...
The implementation of KeepAliveStream that you have linked violates the contract under which available() and skip() are guaranteed to be non-blocking, and thus may indeed block.
The contract of available() guarantees a single non-blocking skip():
Returns an estimate of the number of bytes that can be read (or
skipped over) from this input stream without blocking by the next
caller of a method for this input stream. The next caller might be
the same thread or another thread. A single read or skip of this
many bytes will not block, but may read or skip fewer bytes.
Whereas the implementation calls skip() multiple times per single call to available():
if (nskip <= available()) {
    long n = 0;
    // The loop below can iterate several times,
    // only the first call is guaranteed to be non-blocking.
    while (n < nskip) {
        nskip = nskip - n;
        n = skip(nskip);
    }
This doesn't prove that your application blocks because KeepAliveStream incorrectly uses InputStream. Some implementations of InputStream may possibly provide stronger non-blocking guarantees, but I think it is a very likely suspect.
EDIT: After a bit more research, it turns out this is a very recently fixed bug in the JDK: https://bugs.openjdk.java.net/browse/JDK-8004863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel. The bug report speaks of an infinite loop, but a blocking skip() could also be a result. The fix seems to address both issues (there is only a single skip() per available()).
I guess this skip() on close() is intended for Keep-Alive support.
See http://docs.oracle.com/javase/6/docs/technotes/guides/net/http-keepalive.html.
Prior to Java SE 6, if an application closes a HTTP InputStream when
more than a small amount of data remains to be read, then the
connection had to be closed, rather than being cached. Now in Java SE
6, the behavior is to read up to 512 Kbytes off the connection in a
background thread, thus allowing the connection to be reused. The
exact amount of data which may be read is configurable through the
http.KeepAlive.remainingData system property.
So keep-alive can be effectively disabled with http.KeepAlive.remainingData=0 or http.keepAlive=false.
But this can negatively affect performance if you always address the same http://t.co host.
As @artbristol suggested, using HEAD instead of GET seems to be the preferable solution here.
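A hedged sketch of that approach (the timeouts are illustrative values): since a HEAD response has no body, close() has nothing to skip, and the explicit timeouts bound any read in any case.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class UrlResolver {
    static boolean isRedirect(int code) {
        return code >= 300 && code < 400;
    }

    // Resolve a single redirect hop without ever transferring a body.
    static String resolveOnce(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("HEAD");          // no response body to drain on close()
        conn.setInstanceFollowRedirects(false); // inspect each hop ourselves
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);             // bound how long any read may block
        try {
            int code = conn.getResponseCode();
            String location = conn.getHeaderField("Location");
            return isRedirect(code) && location != null ? location : url;
        } finally {
            conn.disconnect();
        }
    }
}
```

Looping resolveOnce() until the status is no longer a 3xx reproduces the worker's expansion logic while keeping every step bounded.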
I was facing a similar issue when I was trying to make a HEAD request. To fix it, I removed the HEAD method, because I just wanted to ping the URL.

Java NIO: transferFrom until end of stream

I'm playing around with the NIO library. I'm attempting to listen for a connection on port 8888 and once a connection is accepted, dump everything from that channel to somefile.
I know how to do it with ByteBuffers, but I'd like to get it working with the allegedly super efficient FileChannel.transferFrom.
This is what I got:
ServerSocketChannel ssChannel = ServerSocketChannel.open();
ssChannel.socket().bind(new InetSocketAddress(8888));
SocketChannel sChannel = ssChannel.accept();
FileChannel out = new FileOutputStream("somefile").getChannel();
while (... sChannel has not reached the end of the stream ...) // <-- what to put here?
    out.transferFrom(sChannel, out.position(), BUF_SIZE);
out.close();
So, my question is: How do I express "transferFrom some channel until end-of-stream is reached"?
Edit: Changed 1024 to BUF_SIZE, since the size of the buffer used, is irrelevant for the question.
There are a few ways to handle the case. First, some background on how transferTo/From is implemented internally and when it can be superior.
First and foremost, you should know how many bytes you have to transfer, i.e. use FileChannel.size() to determine the maximum available and sum the results. This case refers to FileChannel.transferTo(socketChannel):
The method does not return -1.
The method is emulated on Windows. Windows doesn't have an API function to transfer from a file descriptor to a socket; it does have one (two, actually) to transfer from a file designated by name - but that's incompatible with the Java API.
On Linux the standard sendfile (or sendfile64) is used; on Solaris it's called sendfilev64.
In short, for (long xferBytes = 0; startPos + xferBytes < fchannel.size();) doXfer() will work for transfers from file -> socket.
There is no OS function that transfers from a socket to a file (which is what the OP is interested in). Since the socket data is not in the OS cache, it can't be done as effectively; it's emulated. The best way to implement the copy is via a standard loop using a pooled direct ByteBuffer sized to the socket read buffer. Since I use only non-blocking IO, that involves a selector as well.
That being said, regarding "I'd like to get it working with the allegedly super efficient transferFrom": it is not efficient, and it's emulated on all OSes, hence it will end the transfer when the socket is closed, gracefully or not. The function will not even throw the inherited IOException, provided there was ANY transfer (if the socket was readable and open).
I hope the answer is clear: the only interesting use of FileChannel.transferFrom happens when the source is a file. The most efficient (and interesting) case is file -> socket, and file -> file is implemented via FileChannel.map/unmap(!!).
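The file -> socket direction described above can be sketched as a plain loop over transferTo(), which may move fewer bytes than requested per call (the class name is illustrative):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class SendFile {
    // Push the whole file to the target channel. transferTo() may transfer
    // fewer bytes than asked for, so loop, advancing the file offset.
    public static long sendFully(FileChannel file, WritableByteChannel target) throws IOException {
        long size = file.size();
        long sent = 0;
        while (sent < size) {
            sent += file.transferTo(sent, size - sent, target);
        }
        return sent;
    }
}
```

This is the direction where transferTo() can actually map to sendfile() on Linux; the target need not be a socket for the loop itself to be correct.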
Answering your question directly:
while ((count = socketChannel.read(this.readBuffer)) >= 0) {
    // do something
}
But if this is what you do, you don't get any benefit from non-blocking IO, because you are actually using it exactly like blocking IO. The point of non-blocking IO is that one network thread can serve several clients simultaneously: if there is nothing to read from one channel (i.e. count == 0), you can switch to another channel (belonging to another client connection).
So the loop should actually iterate over different channels instead of reading from one channel until it is done.
Take a look on this tutorial: http://rox-xmlrpc.sourceforge.net/niotut/
I believe it will help you to understand the issue.
I'm not sure, but the JavaDoc says:
An attempt is made to read up to count bytes from the source channel
and write them to this channel's file starting at the given position.
An invocation of this method may or may not transfer all of the
requested bytes; whether or not it does so depends upon the natures
and states of the channels. Fewer than the requested number of bytes
will be transferred if the source channel has fewer than count bytes
remaining, or if the source channel is non-blocking and has fewer than
count bytes immediately available in its input buffer.
I think you may say that telling it to copy infinite bytes (of course not in a loop) will do the job:
out.transferFrom(sChannel, out.position(), Integer.MAX_VALUE);
So, I guess when the socket connection is closed, the state will get changed, which will stop the transferFrom method.
But as I already said: I'm not sure.
allegedly super efficient FileChannel.transferFrom.
If you want both the benefits of DMA access and nonblocking IO the best way is to memory-map the file and then just read from the socket into the memory mapped buffers.
But that requires that you preallocate the file.
This way:
URLConnection connection = new URL("target").openConnection();
File file = new File(connection.getURL().getPath().substring(1));
FileChannel download = new FileOutputStream(file).getChannel();
while (download.transferFrom(Channels.newChannel(connection.getInputStream()),
        file.length(), 1024) > 0) {
    // Some calculations to get the current speed ;)
}
transferFrom() returns a count. Just keep calling it, advancing the position/offset, until it returns zero. But start with a much larger count than 1024, more like a megabyte or two, otherwise you're not getting much benefit from this method.
EDIT To address all the commentary below, the documentation says that "Fewer than the requested number of bytes will be transferred if the source channel has fewer than count bytes remaining, or if the source channel is non-blocking and has fewer than count bytes immediately available in its input buffer." So provided you are in blocking mode it won't return zero until there is nothing left in the source. So looping until it returns zero is valid.
EDIT 2
The transfer methods are certainly mis-designed. They should have been designed to return -1 at end of stream, like all the read() methods.
Building on top of what other people here have written, here's a simple helper method which accomplishes the goal:
public static void transferFully(FileChannel fileChannel, ReadableByteChannel sourceChannel, long totalSize) throws IOException {
    for (long bytesWritten = 0; bytesWritten < totalSize;) {
        bytesWritten += fileChannel.transferFrom(sourceChannel, bytesWritten, totalSize - bytesWritten);
    }
}
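For completeness, a runnable round-trip using that helper (repeated here so the example is self-contained), with an in-memory source channel standing in for the socket and a temp file as the destination:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TransferFullyDemo {
    public static void transferFully(FileChannel fileChannel, ReadableByteChannel sourceChannel,
                                     long totalSize) throws IOException {
        for (long bytesWritten = 0; bytesWritten < totalSize;) {
            bytesWritten += fileChannel.transferFrom(sourceChannel, bytesWritten,
                    totalSize - bytesWritten);
        }
    }

    // Copy `data` through transferFully() into a temp file and read it back.
    static byte[] roundTrip(byte[] data) throws IOException {
        Path tmp = Files.createTempFile("xfer", ".bin");
        try (ReadableByteChannel src = Channels.newChannel(new ByteArrayInputStream(data));
             FileChannel out = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            transferFully(out, src, data.length);
        }
        byte[] result = Files.readAllBytes(tmp);
        Files.delete(tmp);
        return result;
    }
}
```

Note the helper assumes totalSize is known up front; if the source ends early, transferFrom() keeps returning 0 and the loop never terminates, which is exactly the "no -1 at end of stream" caveat discussed above.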
