With the advent of NIO, most socket types became "selectable" through a SelectableChannel implementation. Unfortunately, DatagramChannel does not support multicast prior to Java 7; in earlier versions multicast is only available via the MulticastSocket class.
I want some way to detect that there are pending (i.e. readable) messages on a multicast datagram socket. I would like to read until there are no datagrams remaining within the immediate time window. Having received all pending messages, I then want to invoke a callback, but not for each message individually, and not before all pending messages have been read.
To make this simpler, let's assume one socket. In pseudocode:
List<Msg> received = new ArrayList<Msg>();
while (true)
{
    received.clear();

    // initial blocking receive
    data = receive_blocking(socket, datagram);
    received.add(new Msg(data));

    // flush out remaining messages
    for (boolean receiving = true; receiving; )
    {
        // non-blocking
        if (receive_nonblocking(socket, datagram))
            received.add(new Msg(datagram));
        else
            receiving = false;
    }

    callback(received);
}
The question is how to implement receive_nonblocking without NIO 2. I do not need the Selector mechanism, but I am wondering whether there is some way to do a non-blocking read, or otherwise detect whether anything is pending.
I had read that to use a selector, the channel must be created directly, as in DatagramChannel.open(), rather than acquired after socket creation. So if I am correct, I could not use socket.getChannel() to create a selector after creating the socket.
Is there any way to do this, pre-Java 7, that doesn't involve JNI or timers?
Just set a very short read timeout, and catch SocketTimeoutException, which will be thrown when it expires, and break out of your reading loop.
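A minimal sketch of that approach against the pseudocode above, assuming a MulticastSocket that has already joined its group (the 64 KB buffer and the 1 ms timeout are arbitrary choices):

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.MulticastSocket;
import java.net.SocketTimeoutException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DatagramBatcher {
    // Blocks for the first datagram, then drains whatever else is pending,
    // using a 1 ms timeout as the "non-blocking" receive.
    public static List<byte[]> receiveBatch(MulticastSocket socket) throws IOException {
        List<byte[]> received = new ArrayList<byte[]>();
        byte[] buf = new byte[65535];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);

        socket.setSoTimeout(0);                 // 0 = block until the first datagram
        socket.receive(packet);
        received.add(copyOf(packet));

        socket.setSoTimeout(1);                 // near-non-blocking for the rest
        try {
            while (true) {
                packet.setLength(buf.length);   // reset the length before reuse
                socket.receive(packet);
                received.add(copyOf(packet));
            }
        } catch (SocketTimeoutException end) {
            // no more datagrams in the immediate time window
        }
        return received;
    }

    private static byte[] copyOf(DatagramPacket p) {
        return Arrays.copyOfRange(p.getData(), p.getOffset(),
                                  p.getOffset() + p.getLength());
    }
}

The returned list then plays the role of received in the pseudocode, ready to hand to the callback.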
Related
I am trying to simulate UDP using Java. I am sending a file from one host to another. This is the part of the receiver:
server.setSoTimeout(10000);
while (true)
{
    try
    {
        DatagramPacket received = new DatagramPacket(receivedData, receivedData.length);
        server.receive(received);
        // write only the bytes actually received, not the whole buffer
        out.write(received.getData(), 0, received.getLength());
    }
    catch (IOException e)
    {
        break;
    }
}
server.close();
This solution works, but I am not satisfied with it for some reason.
The sender sends all the packets and then closes its DatagramSocket. The receiver gets all the packets and terminates, but only because of the timeout.
So if I start my receiver and nothing is sent for 10 seconds, the receiver shuts down and nothing is transmitted.
Is there a way of terminating the loop without specifying the timeout?
I was also wondering if there is a method for the other host to establish a connection - something like ServerSocket.accept(), which basically waits for the other host to connect. But I decided to use DatagramSocket, and I can't find a solution to this issue.
Does anybody know of a method that would perform this?
No.
Datagram (UDP) sockets are inherently connectionless. Closing a DatagramSocket does not have any effect which is visible to a remote system. It prevents an application from sending or receiving any further data on that socket, and frees up the port for use by other applications on the local system, but it does not cause any notification to be sent over the network.
If you want to notify the remote server that you are done sending data, you will need to send them a datagram notifying them of that.
If you are trying to transfer a file over UDP, keep in mind that UDP packets are not guaranteed to be received, nor are they guaranteed to be received in the same order they are transmitted! (That is, they may be dropped or reordered by the network.)
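A minimal sketch of that notification idea, using a zero-length datagram as a hypothetical end-of-transmission marker; since the marker itself can be lost, the receive timeout stays in place as a fallback (address and port on the sender side are placeholders):

// Sender side: after the file data, send an empty datagram as an EOF marker.
socket.send(new DatagramPacket(new byte[0], 0, address, port));

// Receiver side: break on the marker, keep the timeout as a safety net.
server.setSoTimeout(10000);
while (true) {
    DatagramPacket received = new DatagramPacket(receivedData, receivedData.length);
    try {
        server.receive(received);
    } catch (SocketTimeoutException e) {
        break;                              // fallback: the marker was lost
    }
    if (received.getLength() == 0) {
        break;                              // sender signalled end of transmission
    }
    out.write(received.getData(), 0, received.getLength());
}
server.close();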
I am running into some issues with the Java socket API. I am trying to display the number of players currently connected to my game. It is easy to determine when a player has connected. However, it seems unnecessarily difficult to determine when a player has disconnected using the socket API.
Calling isConnected() on a socket that has been disconnected remotely always seems to return true. Similarly, calling isClosed() on a socket that has been closed remotely always seems to return false. I have read that to actually determine whether or not a socket has been closed, data must be written to the output stream and an exception must be caught. This seems like a really unclean way to handle this situation. We would just constantly have to spam a garbage message over the network to ever know when a socket had closed.
Is there any other solution?
There is no TCP API that will tell you the current state of the connection. isConnected() and isClosed() tell you the current state of your socket. Not the same thing.
isConnected() tells you whether you have connected this socket. You have, so it returns true.
isClosed() tells you whether you have closed this socket. Until you have, it returns false.
If the peer has closed the connection in an orderly way
read() returns -1
readLine() returns null
readXXX() throws EOFException for any other XXX.
A write will throw an IOException: 'connection reset by peer', eventually, subject to buffering delays.
If the connection has dropped for any other reason, a write will throw an IOException, eventually, as above, and a read may do the same thing.
If the peer is still connected but not using the connection, a read timeout can be used.
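A small sketch tying those signals together (the buffer size and timeout are arbitrary):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Reads until the peer closes, distinguishing the cases described above.
static void readUntilClosed(Socket socket) {
    byte[] buf = new byte[4096];
    try {
        socket.setSoTimeout(30000);                // optional idle check, 30 s
        InputStream in = socket.getInputStream();
        int n;
        while ((n = in.read(buf)) != -1) {
            // ... process n bytes from buf ...
        }
        // read() returned -1: the peer closed the connection in an orderly way
    } catch (SocketTimeoutException e) {
        // peer still connected but idle for longer than the timeout
    } catch (IOException e) {
        // the connection dropped for some other reason, e.g. a reset
    }
}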
Contrary to what you may read elsewhere, ClosedChannelException doesn't tell you this. [Neither does SocketException: socket closed.] It only tells you that you closed the channel, and then continued to use it. In other words, a programming error on your part. It does not indicate a closed connection.
As a result of some experiments with Java 7 on Windows XP it also appears that if:
you're selecting on OP_READ
select() returns a value of greater than zero
the associated SelectionKey is already invalid (key.isValid() == false)
it means the peer has reset the connection. However this may be peculiar to either the JRE version or platform.
It is general practice in various messaging protocols to keep heartbeating each other (keep sending ping packets); the packets do not need to be very large. The probing mechanism will let you detect a disconnected client even before TCP figures it out in general (TCP timeouts are far higher). Send a probe and wait, say, 5 seconds for a reply; if you see no reply for, say, 2-3 subsequent probes, your player is disconnected.
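A rough sketch of such a probe loop; the interval, the miss limit and the one-byte ping are all assumptions, and a real protocol would also check for pong replies on the read side:

import java.io.IOException;
import java.net.Socket;

// Pings every 5 seconds; gives up after 3 consecutive failed sends.
static void probe(Socket socket) throws InterruptedException {
    int missedProbes = 0;
    while (missedProbes < 3) {
        try {
            socket.getOutputStream().write(0x01);   // hypothetical ping byte
            socket.getOutputStream().flush();
            missedProbes = 0;                       // probe went out: reset
        } catch (IOException e) {
            missedProbes++;                         // write failed: count a miss
        }
        Thread.sleep(5000);                         // 5-second probe interval
    }
    // 3 misses in a row: treat this player as disconnected
}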
Also, related question
I see the other answer just posted, but I think you are interacting with clients playing your game, so I may pose another approach (while BufferedReader is definitely valid in some cases).
If you wanted to... you could delegate the "registration" responsibility to the client. I.e. you would have a collection of connected users with a timestamp on the last message received from each... if a client times out, you would force a re-registration of the client, but that leads to the quote and idea below.
I have read that to actually determine whether or not a socket has been closed, data must be written to the output stream and an exception must be caught. This seems like a really unclean way to handle this situation.
If your Java code did not close/disconnect the Socket, then how else would you be notified that the remote host closed your connection? Ultimately, your try/catch is doing roughly the same thing that a poller listening for events on the ACTUAL socket would be doing. Consider the following:
your local system could close your socket without notifying you... that is just the implementation of Socket (i.e. it doesn't poll the hardware/driver/firmware/whatever for state change).
new Socket(Proxy p)... there are multiple parties (6 endpoints really) that could be closing the connection on you...
I think one of the features of abstracted languages is that you are abstracted from the minutiae. Think of the using keyword in C# (try/finally) for SqlConnections or whatever... it's just the cost of doing business... I think that try/catch/finally is the accepted and necessary pattern for Socket use.
I faced a similar problem. In my case the client must send data periodically; I hope you have the same requirement. I set SO_TIMEOUT with socket.setSoTimeout(1000 * 60 * 5), which throws java.net.SocketTimeoutException when the specified time expires. Then I can detect a dead client easily.
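Roughly, the per-client read loop then looks like this (the buffer size and the five-minute keepalive contract are assumptions):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Reads from one client; a client silent for longer than the timeout is
// treated as dead. Assumes clients must send something, data or a
// keepalive, at least every 5 minutes.
static void serveClient(Socket socket) throws IOException {
    socket.setSoTimeout(1000 * 60 * 5);
    InputStream in = socket.getInputStream();
    byte[] buf = new byte[4096];
    try {
        int n;
        while ((n = in.read(buf)) != -1) {
            // ... process n bytes ...
        }
        // -1: the client closed the connection in an orderly way
    } catch (SocketTimeoutException e) {
        // silent too long: assume the client is dead
    } finally {
        socket.close();
    }
}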
I think this is the nature of TCP connections: per the standards, it takes about 6 minutes of silence in transmission before we conclude that the connection is gone!
So I don't think you can find an exact solution for this problem. Maybe the better way is to write some handy code to guess when the server should assume that a user's connection is closed.
As @user207421 says, there is no way to know the current state of the connection because of the TCP/IP protocol architecture model. So the server has to notify you before closing the connection, or you have to check it yourself.
This is a simple example that shows how to detect that the socket has been closed by the server:
InetSocketAddress sockAdr = new InetSocketAddress(SERVER_HOSTNAME, SERVER_PORT);
Socket socket = new Socket();
int timeout = 5000;
socket.connect(sockAdr, timeout);
BufferedReader reader = new BufferedReader(
        new InputStreamReader(socket.getInputStream()));
String data;
while ((data = reader.readLine()) != null)
    Log.e(TAG, "received -> " + data);
Log.e(TAG, "Socket closed !");
Here is another general solution for any data type.
int offset = 0;
byte[] buffer = new byte[8192];
try {
    do {
        int b = inputStream.read();
        if (b == -1)
            break;
        buffer[offset++] = (byte) b;
        // check offset against the buffer length and reallocate the array if needed
    } while (inputStream.available() > 0);
} catch (SocketException e) {
    // connection was lost
}
// process buffer
That's how I handle it:
String receiveMessage;
while ((receiveMessage = receiveRead.readLine()) != null) {
    System.out.println("first message same :" + receiveMessage);
    System.out.println(receiveMessage);
}
// readLine() returned null: the client has disconnected
System.out.println("Client has disconnected: " + sock.isClosed());
System.exit(1);
On Linux, write()ing into a socket whose other side, unknown to you, has been closed will provoke a SIGPIPE signal/exception, however you want to call it. If you don't want to be caught out by the SIGPIPE, you can use send() with the MSG_NOSIGNAL flag. The send() call will then return -1, and you can check errno, which will tell you that you tried to write to a broken pipe (in this case a socket) with the value EPIPE, which according to errno.h is 32. As a reaction to the EPIPE you could double back, reopen the socket, and try to send your information again.
I have working code that uses non-blocking IO to fetch UDP packets like this:
DatagramChannel channel = DatagramChannel.open();
channel.socket().bind(new InetSocketAddress(AUDIO_PORT));
channel.configureBlocking(false);

while (true) {
    ByteBuffer packet = ByteBuffer.allocate(MAX_PACKET);
    if (channel.receive(packet) != null) {
        // Got something!
        ...
    }
    ...
}
That works perfectly.
Now I'm trying to do exactly the same, only this time I want to use selectors, like this:
// Create a datagram channel, bind it to the port, configure non-blocking:
DatagramChannel channel = DatagramChannel.open();
channel.socket().bind(new InetSocketAddress(AUDIO_PORT));
channel.configureBlocking(false);

// Create a selector and register the channel with it:
Selector selector = Selector.open();
channel.register(selector, SelectionKey.OP_READ);

// Spin
while (true) {
    // If there's a packet available, fetch it:
    if (selector.selectNow() >= 1) {
        // **CODE NEVER REACHES THIS POINT**
        ByteBuffer packet = ByteBuffer.allocate(MAX_PACKET);
        channel.receive(packet);
        ...
    }
    ...
}
Due to the application I'm making, I really need non-blocking IO (even though it looks like I'm just spinning in my example), and blocking with a short timeout just won't work. I also really have to use Selectors. The issue is that even though I have a server actively sending packets to the device's AUDIO_PORT, the select operation always returns 0. I know the server app is doing its work, since the first snippet works fine. Am I setting up the Selector wrong? I'm guessing I'm missing some step, but I just can't figure it out.
As I believe we have discussed previously in other threads, if the first code works and the second doesn't, Selectors must be broken in Android. Your code is correct (as long as you clear the selected key set of the Selector every time you get a non-zero return). You can verify that by running it on a Java platform.
You might consider changing select() == 1 to select() > 0 for greater generality, and then looping around the selected-key set as you will see in all the examples, but it shouldn't affect the correctness of this code.
As I also believe we have discussed, you could try blocking mode with a short read timeout instead of the Selector.
NB You aren't spinning, you are blocking forever in select().
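For reference, a sketch of that loop with the selected-key set drained on every pass; it reuses channel, selector and MAX_PACKET from the question's second snippet (plus java.util.Iterator):

while (true) {
    if (selector.selectNow() > 0) {
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();                      // clear the key from the selected set
            if (key.isValid() && key.isReadable()) {
                ByteBuffer packet = ByteBuffer.allocate(MAX_PACKET);
                channel.receive(packet);
                // ... handle the packet ...
            }
        }
    }
    // ... other non-blocking work ...
}

Without the remove(), keys accumulate in the selected set and you can end up processing stale readiness notifications.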
It's not my first time trying to understand this issue, but I hope it will be the last one.
Some background:
I have a Java SocketChannel NIO server working in non-blocking mode.
This server has multiple clients which send and receive messages from it.
Each client maintains its connection to the server with "keepalive" messages every once in a while.
The main idea with the server is that the clients remain connected "all the time" and receive messages from it in "push" mode.
Now to my questions:
In the Java NIO read() function, when read() returns -1, it means EOS.
In a question I asked here earlier, I came away thinking it means the socket has finished its current stream and doesn't need to be closed.
But searching Google a bit more, I found out that it does mean the connection is closed on the other side.
What does the word "stream" exactly mean? Is it the current message being sent from the client? Is it the ability of the client-side connection to send any more messages?
Why would a SocketChannel be closed on the client side if the client never told it to close?
What is the difference between read() returning -1 and a "connection reset by peer" I/O error?
This is how I read from the SocketChannel:
private JSONObject readIncomingData(SocketChannel socketChannel)
        throws JSONException, InvalidKeyException, IllegalBlockSizeException, BadPaddingException, IOException {
    JSONObject returnObject = null;
    ByteBuffer buffer = ByteBuffer.allocate(1024);
    Charset charset = Charset.forName("UTF-8");
    String endOfMessage = "\"}";
    String message = "";
    StringBuilder input = new StringBuilder();
    boolean continueReading = true;
    while (continueReading && socketChannel.isOpen())
    {
        buffer.clear();
        int bytesRead = socketChannel.read(buffer);
        if (bytesRead == -1)
        {
            continueReading = false;
            continue;
        }
        buffer.flip();
        input.append(charset.decode(buffer));
        message = input.toString();
        if (message.contains(endOfMessage))
            continueReading = false;
    }
    if (input.length() > 0 && message.contains(endOfMessage))
    {
        JSONObject messageJson = new JSONObject(input.toString());
        returnObject = new JSONObject(encrypter.decrypt(messageJson.getString("m")));
    }
    return returnObject;
}
What does the word "stream" exactly mean? Is it the current message being sent from the client? Is it the ability of the client-side connection to send any more messages?
The stream means the data that is flowing between two locations, usually between the client and the server but effectively it's any kind of data flowing. E.g. if you read a file from your hard disc you use a FileInputStream which represents data flowing from the file on disc to your program. It's a very generic concept. Think of it as a river where the water is the data. Plus it's a very cool kind of river which allows you to control how the water/data is flowing.
Why would a SocketChannel be closed on the client side if the client never told it to close?
That can happen if the connection between client and server is reset or interrupted. Your program should never assume that connections just live and are never interrupted. Connections are interrupted for all kinds of reasons, be it a flaky network component, someone pulling a plug that should better be left where it was, or the wireless network going down. Also the server might close the connection, e.g. if the server program goes down, has a bug, or the connection runs into a timeout. Always remember that open connections are a limited resource, so servers might decide to close them if they are idle for too long.
What is the difference between read() returning -1 and a "connection reset by peer" I/O error?
When read() returns -1 this simply means that there is currently no more data in the stream. A connection reset means there was probably more data, but the connection no longer exists and therefore this data cannot be read anymore. Again taking the river analogy: think of the data as some quantity of water being sent from a village upstream (aka Serverville) to a village downstream (aka Clientville) using a riverbed that connects the two villages (the connection). Now someone at Serverville pulls the big lever and the water (the data) flows down from Serverville to Clientville. After Serverville has sent all the water it wanted to send, it closes the lever and the riverbed will be empty again (and actually destroyed, as the connection got closed). This is where Clientville gets the -1. Now imagine a bulldozer interrupting the riverbed, so that some of the water never makes it to Clientville. This is the "connection reset" situation.
Hope this helps :)
What does the word "stream" exactly mean? Is it the current message being sent from the client?
It is a stream of bytes, not messages. You can use those bytes to form a message but the stream has no idea you are doing this, nor does it support messages in any way.
Why would a SocketChannel be closed on the client side if the client never told it to close?
You can only get a -1 if the other end closed the connection.
What is the difference between read() returning -1 and a "connection reset by peer" I/O error?
A connection can also be closed or dropped in other ways, such as being closed from your own side, or a timeout in the connection, e.g. because you pulled out the network cable.
BTW: The way you have written the code is better suited to blocking NIO. For example, if you receive more than one whole message, anything after the first one is discarded. If you use blocking IO and keep everything you read you will not get corrupted or dropped messages.
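To illustrate that point, a minimal length-prefixed framing sketch over blocking streams; the int-length-then-payload format is an assumption, not something the socket or channel provides:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

// Writer: prefix each message with its length in bytes.
static void writeMessage(DataOutputStream out, byte[] msg) throws IOException {
    out.writeInt(msg.length);
    out.write(msg);
    out.flush();
}

// Reader: return exactly one message, or null on orderly close.
static byte[] readMessage(DataInputStream in) throws IOException {
    int len;
    try {
        len = in.readInt();          // EOFException here means the peer closed
    } catch (EOFException e) {
        return null;                 // orderly close between messages
    }
    byte[] msg = new byte[len];
    in.readFully(msg);               // blocks until the whole message arrives
    return msg;
}

With framing like this, nothing after the first message is discarded, because each read consumes exactly one message from the byte stream.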
What does the word "stream" exactly mean? Is it the current message being sent from the client?
It basically means one side of the connection, which is full-duplex. TCP is a byte-stream protocol, providing two independent byte streams, one in each direction.
Why would a SocketChannel be closed on the client side if the client never told it to close?
It wouldn't. The client did close the connection. That's what read() returning -1 means.
What is the difference between read() returning -1 and a "connection reset by peer" I/O error?
read() returning -1 means the peer closed the connection properly. 'Connection reset by peer' indicates a protocol error of some kind, usually that you have written data to a connection that had already been closed by the peer.
Re your code, if read() returns -1 you must close the channel. There is no other sensible way to proceed.
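Concretely, that means the -1 branch in readIncomingData above should clean up instead of just ending the loop; a sketch, where key is a hypothetical SelectionKey for code that also uses a Selector:

int bytesRead = socketChannel.read(buffer);
if (bytesRead == -1) {
    // End of stream: the peer has closed, so release the connection.
    if (key != null) {
        key.cancel();            // deregister from the Selector, if one is used
    }
    socketChannel.close();
    return null;                 // signal the caller that this client is gone
}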
Is the code below sufficient to accept concurrent UDP transmissions? More specifically, if 2 clients transmit concurrently, will DatagramSocket queue up the transmissions and deliver them one by one as I call receive(), or will only one make it through?
DatagramSocket socket = new DatagramSocket(port, address);
byte[] buffer = new byte[8192];
while (!disconnect) {
    DatagramPacket p = new DatagramPacket(buffer, buffer.length);
    socket.receive(p);
}
There is no queuing by default. The client may retry until a timeout or similar is reached.
UDP is quite fast, but under heavy load you may have clients whose packets never get through.
If the packets make it to your network interface (imagine lost packets on a congested wireless channel), they will be passed up and the blocking socket.receive(p) call will return. If there is a collision of packets on the channel because two clients transmitted at the same time, you will not get either of the two packets. But this is most likely not a problem, because the access technology of the network interface takes care of it; see CSMA/CA or CSMA/CD.
After calling socket.receive(p) you should create a new thread to process the packet itself. That will make sure that the next packet can be received on the socket.
EDIT:
Description of INTEL's TX and RX descriptors
A basic solution would have one thread responsible for handling a number of incoming requests (with your desired limit) and then handing them off to other worker/request-handler threads. This basic structure is very much the same in most servers: a main thread responsible for handing off requests to worker threads. When each of these worker threads is finished, you can update a shared/global counter to let the main thread know that it can accept a new request. This will require synchronization, but it's a neat and simple abstraction.
Here's the idea:
Server Thread:
// Receive Packet
while (true) {
    serverLock.acquire();
    try {
        if (numberOfRequests < MAX_REQUESTS) {
            packet = socket.receive();
            numberOfRequests++;
            requestThread(packet).start();   // start, not run: hand off to a worker
        } else {
            serverMonitor.wait(serverLock);
        }
    } finally {
        serverLock.release();
    }
}
Request Thread:
// Handle Packet, then release this thread's slot
serverLock.acquire();
try {
    if (numberOfRequests == MAX_REQUESTS) {
        serverMonitor.pulse();   // a slot is about to free up: wake the server
    }
    numberOfRequests--;          // always release the slot, not only at the limit
} finally {
    serverLock.release();
}
This is just to give you an idea of what you can start out with. But when you get the hang of it, you'll be able to make optimizations and enhancements to make sure the synchronization is all correct.
One particular enhancement, which also lends itself to limited number of requests, is something called a ThreadPool.
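In Java that enhancement maps naturally onto ExecutorService: a fixed-size pool bounds the number of concurrent handlers without hand-rolled counters and monitors. A sketch, where the port, pool size and buffer size are placeholders:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledUdpServer {
    private static final int MAX_REQUESTS = 8;            // placeholder pool size

    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(4445); // placeholder port
        ExecutorService pool = Executors.newFixedThreadPool(MAX_REQUESTS);
        while (true) {
            byte[] buffer = new byte[8192];               // fresh buffer per packet,
                                                          // so workers don't share it
            final DatagramPacket p = new DatagramPacket(buffer, buffer.length);
            socket.receive(p);                            // the OS buffers datagrams
                                                          // until we get here
            pool.execute(new Runnable() {                 // hand off to a worker
                public void run() {
                    // ... process p.getData() from 0 to p.getLength() ...
                }
            });
        }
    }
}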