If a write(ByteBuffer) completes, does it mean that the other side has received the data? Does TCP ensure that the data reaches the other side?
No. It only means that the data was written into the local socket buffer. You can't be sure that it will also be transmitted (network failure, ...).
No.
TCP will make its best endeavour. You are guaranteed that whatever does arrive will be intact and in the correct order, up to the last receive that completes successfully.
Related
How can I be sure that data is successfully delivered to the other end in socket programming?
outStream.write() doesn't guarantee that the bytes are received on the other end. I can force the server to send back some confirmation data, but how long should the client wait for it? If I don't wait long enough, the data may be delivered to the server just as I throw the timeout exception in the client (which then shows an error dialog, even though the server actually received the data). On the other hand, I don't want to wait too long.
Should the client wait some time and, if confirmation is received, send a third "commit" message to the server, which then passes the data on for further processing (so first the client writes, then the server replies, then the client confirms)? But then again, if the commit message is not received by the server, the client thinks the data was sent successfully while the server will eventually discard it, because it never received the commit message. And so on; the bouncing never ends...
How is this situation generally handled?
Every tutorial I have read is just about creating/closing sockets, sending data on the client side and receiving it on the server side.
If you have links to blogs which explain this problem (or even books), that would be good too.
[EDIT]
I should clarify some things. I'm using Java for the client and the server, and later I will create a C# client. Everything is working perfectly for now. Both client and server are on the same LAN and I have never had any real problems. The scenario explained above is just theoretical, because I would like to cover as much as possible, including error handling.
I know TCP guarantees delivery, but in Java, out.write() doesn't block until the underlying TCP stack delivers the data or fails; it just continues execution, and I don't know whether the send failed or not. There is no callback function. I'm starting with socket programming, so maybe there is a very simple solution that I don't know about. All I need is for the client to know that the server received the message (if that is even possible).
If you have this kind of extreme need for reliability, you need to build that into your application and protocol. One way I have done that in the past is as follows.
Say you have a stream of "objects" (objects here defined in whatever way makes sense to your application) that need to be communicated from client C to server S. Associate a unique identifier with each object on the client side. Then have C send each object along with its identifier to S. But have C keep its copy of the object for now (in memory, or on disk, or whatever makes sense).
For each object S receives, it stores the object together with its unique identifier in its own local data store, and sends back an acknowledgment to C that it received the object (using the identifier to communicate that). C can now delete that object from its data store (strictly speaking it can delete all the ones it sent prior to that object as well -- since TCP guarantees sequenced delivery -- but that slightly complicates things).
This process can continue indefinitely and C never needs to explicitly wait for a confirmation for any one object. It simply maintains a local copy of each object. As long as the connection stays up, S will continually acknowledge every object it has received.
If the connection is broken for any reason, C assumes that S has not received any object it sent since the most recently received acknowledgment. When the connection is re-established, C may therefore resend a few objects that S previously received but since S stored the unique identifier along with each object, it simply acknowledges again that it received the object.
If S hangs for some reason, then eventually buffers between client and server will fill up and C's send will block. The client may need to be prepared for this eventuality.
At the end of the stream of objects -- if there is an end -- C will need to wait for the last object to be acknowledged. There's simply no way around that, and so you will need to decide how long it's appropriate to wait before C gives up and declares an error.
(Of course, this is all essentially duplicating at the application layer what TCP is doing at the transport layer: acknowledging what was actually received with the ability for the sender to re-transmit anything that was lost.)
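A minimal client-side sketch of that scheme, assuming a simple line-based text protocol where each object goes out as "<id>:<payload>" and the server replies "ACK:<id>"; the class name and protocol format here are made up for illustration, not part of the original answer:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class ReliableSender {
    // Objects sent but not yet acknowledged, keyed by their unique identifier.
    private final ConcurrentHashMap<String, String> pending = new ConcurrentHashMap<>();
    private final PrintWriter out;

    ReliableSender(Socket socket) throws IOException {
        out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        // Reader thread: drop objects from 'pending' as the server acknowledges them.
        Thread acks = new Thread(() -> {
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("ACK:")) pending.remove(line.substring(4));
                }
            } catch (IOException e) {
                // Connection broken: whatever is still in 'pending' must be resent after reconnecting.
            }
        });
        acks.setDaemon(true);
        acks.start();
    }

    void send(String payload) {
        String id = UUID.randomUUID().toString();
        pending.put(id, payload);              // keep a local copy until it is acknowledged
        out.println(id + ":" + payload);
    }

    // After reconnecting, call this with the new connection's writer; the server stores
    // each object under its identifier, so re-received objects are simply acknowledged again.
    void resendPending(PrintWriter newOut) {
        pending.forEach((id, payload) -> newOut.println(id + ":" + payload));
    }
}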
TCP:
TCP provides reliable, ordered delivery at layer 4 of the OSI model: the receiving side acknowledges the data it receives, and anything unacknowledged is retransmitted. In that case there is either something wrong in your code or your network is malfunctioning. If you are talking about the packet not making it to its destination, make sure you have properly bound the TCP server to the port, and that the destination is correct. While waiting for a packet's arrival, make sure you have a receive timeout in place to prevent your application from hanging on the receive.
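For example, in Java a receive timeout can be set on the socket so that a blocked read eventually gives up; the address and the 5-second value below are only illustrative, and this is a fragment rather than a complete program:

Socket socket = new Socket("example.com", 12345);
socket.setSoTimeout(5000);   // any read now throws SocketTimeoutException after 5 s with no data
try {
    int firstByte = socket.getInputStream().read();
} catch (SocketTimeoutException e) {
    // nothing arrived within the timeout; decide whether to retry or give up
}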
I'm programming a UDP server. Right now, whenever client code needs to send data, each thread representing a "connection" puts a datagram on a blocking queue, and the server thread then takes each datagram off the queue and sends it.
Peeking into DatagramSocket.send, I see that it synchronizes on the DatagramPacket, but I can't tell whether it would ultimately be better for performance to queue everything or to send directly. With the latter I suspect I could use direct ByteBuffers.
So my question is: would it be wiser, in terms of performance, to queue everything or to send directly?
Just send it directly. The socket send buffer already is a queue. The complication of another queue and another thread adds no value at all. Just another thing to go wrong.
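A sketch of the direct approach, assuming one DatagramSocket shared by all "connection" threads (the class and field names are illustrative); as the question notes, DatagramSocket.send synchronizes internally, so calling it from several threads is fine:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;

class Connection {
    private final DatagramSocket socket;       // one socket shared by every connection thread
    private final InetSocketAddress peer;

    Connection(DatagramSocket socket, InetSocketAddress peer) {
        this.socket = socket;
        this.peer = peer;
    }

    void send(byte[] data) throws IOException {
        // No intermediate queue: the kernel's socket send buffer already queues outgoing datagrams.
        socket.send(new DatagramPacket(data, data.length, peer));
    }
}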
I'm building a UDP client that can communicate with a selection of different servers. Given that an NIO application involves using a single receive thread, how can I dispatch incoming datagrams to the correct part of my application? i.e. associate incoming packets with the outgoing packets.
In theory, when sending (or connecting?) to a server, it should be possible to get the source IP/port in the outgoing datagram and then recognise incoming packets as their responses by inspecting the destination IP/port. (because: http://www.dcs.bbk.ac.uk/~ptw/teaching/IWT/transport-layer/source-destination.gif)
Most UDP client examples seem to assume a single server, so that identifying incoming datagrams as responses to outgoing datagrams is trivial, for example:
ByteBuffer textToEcho = ByteBuffer.wrap("blah".getBytes());  // wrap() takes a byte[], not a String
ByteBuffer echoedText = ByteBuffer.allocateDirect(MAX_PACKET_SIZE);
DatagramChannel datagramChannel = DatagramChannel.open(StandardProtocolFamily.INET);
datagramChannel.connect(new InetSocketAddress(REMOTE_IP, REMOTE_PORT));
while (true)
{
    int sent = datagramChannel.write(textToEcho);
    textToEcho.rewind();    // reset position so the same payload can be sent again
    echoedText.clear();     // reuse the receive buffer for the next reply
    datagramChannel.read(echoedText);
}
Perhaps I could use multiple DatagramChannels and iteratively call read() on each, dispatching the data to wherever my application is expecting responses?
If you're dead-set on using just one channel (and one bound local port), you need to avoid using the connect and write methods. Instead, use the send method.
Looping through your servers, use the send method for each server. You will need to rewind() (or duplicate()) your byte buffer after each send, because send does not copy the buffer; it advances the buffer's position.
When all servers have been sent to: in a loop, as long as there are servers that haven't responded, use receive to get both the returned data (the buffer argument) and the server that returned it (the method's return value). Keep looping until the server list is exhausted, but put a time limit on the loop itself (for dead servers or lost packets).
Ideally, in the receive loop you want the receive method to block for a short period of time before timing out. If you can't find a way to configure that, you could use non-blocking mode and put a Thread.sleep in your loop instead. Try to get timed blocking working though; that's the best way.
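A rough sketch of that answer, using one unconnected channel plus a Selector for the timed blocking; the class name, "ping" payload, buffer size, and timeout are placeholders:

import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class MultiServerQuery {
    static final int MAX_PACKET_SIZE = 2048;   // illustrative values
    static final long TIMEOUT_MS = 3000;

    static void queryAll(List<SocketAddress> servers) throws java.io.IOException {
        // One unconnected channel, one local port; send() to each server, then a timed
        // receive loop that matches replies to servers by the source address.
        DatagramChannel channel = DatagramChannel.open();
        channel.bind(new InetSocketAddress(0));
        channel.configureBlocking(false);
        Selector selector = Selector.open();
        channel.register(selector, SelectionKey.OP_READ);

        ByteBuffer request = ByteBuffer.wrap("ping".getBytes());
        for (SocketAddress server : servers) {
            channel.send(request.duplicate(), server);  // duplicate() so every send starts at position 0
        }

        ByteBuffer reply = ByteBuffer.allocate(MAX_PACKET_SIZE);
        Set<SocketAddress> pending = new HashSet<>(servers);
        long deadline = System.currentTimeMillis() + TIMEOUT_MS;
        while (!pending.isEmpty()) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) break;                  // give up on servers that never answered
            if (selector.select(remaining) == 0) continue;
            selector.selectedKeys().clear();
            SocketAddress from;
            while ((from = channel.receive(reply)) != null) {  // drain every waiting datagram
                reply.flip();
                // ... dispatch 'reply' to whichever part of the application is waiting on 'from' ...
                pending.remove(from);
                reply.clear();
            }
        }
    }
}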
You should open a separate datagram channel to each server with which you wish to communicate, and hand off that channel's management (reading/writing) to a separate thread.
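A sketch of that alternative, one connected blocking channel per server, each drained by its own thread (all names and the buffer size are illustrative):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

class ServerLink {
    final DatagramChannel channel;      // connected to exactly one server

    ServerLink(InetSocketAddress server) throws IOException {
        channel = DatagramChannel.open();
        channel.connect(server);        // this channel now only exchanges datagrams with 'server'
        Thread reader = new Thread(() -> {
            ByteBuffer buffer = ByteBuffer.allocate(2048);
            try {
                while (true) {
                    buffer.clear();
                    channel.read(buffer);   // blocking read: only this server's replies arrive here
                    buffer.flip();
                    // ... hand 'buffer' to the code that is talking to this server ...
                }
            } catch (IOException closed) {
                // channel closed; the reader thread simply exits
            }
        });
        reader.setDaemon(true);
        reader.start();
    }

    void send(ByteBuffer request) throws IOException {
        channel.write(request);         // connected channel, so write() goes to the one server
    }
}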
I sometimes receive packets that have already been received (I used a sniffer and the system ACKs them). At the moment I read all the data (until the socket timeout) and then send a new request, but this is ugly. I was thinking about using sequence numbers, but I didn't find them in the Socket interface. Any clues?
No, you don't. If the receiving TCP stack misses a packet, it will re-request it, but it can't have delivered the original one to you, because it missed it. And if it gets a packet it has already received, it will drop it.
TCP will deliver all the bytes that are sent, in the order they are sent. Nothing else (well, except some edge cases around disconnects).
Something else is going on.
EDIT:
To be clear, I'm talking about the bytes that are delivered to your application through the socket's InputStream. What happens on the wire is largely irrelevant unless you have some horrific network retransmission problem that you're trying to investigate. And if the receiving stack does get a duplicate packet, it will ACK it, because if it didn't then the sender would re-send it... again.
It sounds like you're trying to account for things that TCP already takes care of. It has sequence numbers built in and will handle any lost data for you, and on the receiving side you should wait until you have received all the data you expect, rather than reissuing a request. If you don't want to wait for a response to complete before issuing a new request, consider pipelining requests with multiple connections.
When using SocketChannel, you need to retain read and write buffers to handle partial writes and reads.
I have a nagging suspicion that it might not be needed when using a DatagramChannel, but info is scarce.
What is the story?
Should I call (non-blocking) receive(ByteBuffer) repeatedly until I get a null back to read all waiting datagrams?
When sending in non-blocking mode, can I rely on send(ByteBuffer, SocketAddress) to either send the whole buffer or reject it entirely, or do I possibly need to keep partially written buffers?
Every read of a Datagram is the entire datagram, nothing more, nothing less. There's a hint that this is the case in the description of java.nio.DatagramChannel.read:
If there are more bytes in the datagram than remain in the given buffers then the remainder of the datagram is silently discarded
When you're dealing with a SocketChannel, it's a byte stream; there's no guarantee how much or how little data you'll get on each read, as TCP reassembles separate packets to recreate the stream from the other side. But with UDP (which is what you're reading with the DatagramChannel), each packet is its own atomic message.
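So, for the read side of the question above: each receive hands back at most one complete datagram. A small sketch of draining everything that is currently waiting, assuming channel is a non-blocking DatagramChannel and handleDatagram is a hypothetical handler:

ByteBuffer buffer = ByteBuffer.allocate(65535);          // large enough for any datagram we expect
SocketAddress sender;
while ((sender = channel.receive(buffer)) != null) {      // null means nothing more is waiting
    buffer.flip();
    // 'buffer' now holds exactly one whole datagram from 'sender', never a fragment of one
    handleDatagram(sender, buffer);
    buffer.clear();
}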