How can I be sure that data is successfully delivered to the other end in socket programming?
outStream.write() doesn't guarantee that the bytes are received on the other end. I can force the server to send back some confirmation data, but how long should the client wait for it? If the timeout is too short, the data may be delivered to the server just as the client throws the timeout exception (the client then shows an error dialog even though the server actually received the data). On the other hand, I don't want to wait too long.
Should the client wait a while and, if confirmation is received, send a third "commit" message to the server, which then passes the data on for further processing (so first the client writes, then the server replies, and finally the client confirms)? But then again, if the commit message is not received by the server, the client thinks the data was sent successfully while the server eventually discards it, because it never got the commit message. And so on; the bouncing never ends...
How is this situation generally handled?
Every tutorial I have read just covers creating/closing sockets, sending data on the client side, and receiving it on the server side.
If you have links to blogs which explain this problem (or even books), that would be good too.
[EDIT]
I should clarify some things. I'm using Java for the client and the server, and later I will create a C# client. Everything is working perfectly for now. Both client and server are on the same LAN and I have never had any real problems. The scenario explained above is just theoretical; I would like to cover as much as possible, including error handling.
I know TCP guarantees delivery, but in Java, out.write() doesn't block until the underlying TCP stack has either delivered or failed; it just continues execution, and I don't know whether sending failed. There is no callback function. I'm starting with socket programming, so maybe there is a very simple solution that I don't know about. All I need is for the client to know that the server received the message (if that is even possible).
If you have this kind of extreme need for reliability, you need to build that into your application and protocol. One way I have done that in the past is as follows.
Say you have a stream of "objects" (objects here defined in whatever way makes sense to your application) that need to be communicated from client C to server S. Associate a unique identifier with each object on the client side. Then have C send each object along with its identifier to S. But have C keep its copy of the object for now (in memory, or on disk, or whatever makes sense).
For each object S receives, it stores the object together with its unique identifier in its own local data store, and sends back an acknowledgment to C that it received the object (using the identifier to communicate that). C can now delete that object from its data store (strictly speaking it can delete all the ones it sent prior to that object as well -- since TCP guarantees sequenced delivery -- but that slightly complicates things).
This process can continue indefinitely and C never needs to explicitly wait for a confirmation for any one object. It simply maintains a local copy of each object. As long as the connection stays up, S will continually acknowledge every object it has received.
If the connection is broken for any reason, C assumes that S has not received any object it sent since the most recently received acknowledgment. When the connection is re-established, C may therefore resend a few objects that S previously received but since S stored the unique identifier along with each object, it simply acknowledges again that it received the object.
If S hangs for some reason, then eventually buffers between client and server will fill up and C's send will block. The client may need to be prepared for this eventuality.
At the end of the stream of objects -- if there is an end -- C will need to wait for the last object to be acknowledged. There's simply no way around that, and so you will need to decide how long it's appropriate to wait before C gives up and declares an error.
(Of course, this is all essentially duplicating at the application layer what TCP is doing at the transport layer: acknowledging what was actually received with the ability for the sender to re-transmit anything that was lost.)
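For illustration, here is a minimal sketch of the client-side bookkeeping this scheme needs. Everything in it (the ReliableSender class, the id-plus-length framing) is invented for the example, not part of any library:

import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical client-side bookkeeping: keep a copy of every object
// until the server acknowledges its identifier, then discard it.
class ReliableSender {
    private final DataOutputStream out;
    private final Map<Long, byte[]> unacked = new ConcurrentHashMap<>();
    private long nextId = 0;

    ReliableSender(DataOutputStream out) {
        this.out = out;
    }

    // Send an object with a unique id and remember it until acknowledged.
    synchronized void send(byte[] payload) throws IOException {
        long id = nextId++;
        unacked.put(id, payload);
        out.writeLong(id);                  // framing: id, length, then bytes
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Called by the reader thread whenever the server acks an id.
    void onAck(long id) {
        unacked.remove(id);
    }

    // After a reconnect, resend everything not yet acknowledged; the
    // server deduplicates by id, so resending an acked object is harmless.
    synchronized void resendUnacked() throws IOException {
        for (Map.Entry<Long, byte[]> e : unacked.entrySet()) {
            out.writeLong(e.getKey());
            out.writeInt(e.getValue().length);
            out.write(e.getValue());
        }
        out.flush();
    }
}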
TCP:
TCP provides reliable delivery at layer 4 of the OSI model. It is built on acknowledgments: the receiving side must confirm what it has received. So there is either something wrong in your code or your network is malfunctioning. If you are talking about packets not making it to their destination, make sure you have properly bound the TCP server to the port, and that the destination address is correct. While waiting for a packet to arrive, make sure you have a receive timeout in place to prevent your application from hanging on the receive.
Related
I'm programming a UDP server. Right now, when client code needs to send data, every thread representing a "connection" puts a datagram on a blocking queue, and the server thread then reads each datagram and sends it.
Peeking into DatagramSocket.send, I see it synchronizes on the DatagramPacket, but I cannot tell whether, at the end of the day, it would be better performance-wise to queue everything or to send directly. With the latter, I suspect I could use direct ByteBuffers.
So my question is: would it be wiser, in terms of performance, to queue everything or to send directly?
Just send it directly. The socket send buffer already is a queue. The complication of another queue and another thread adds no value at all. Just another thing to go wrong.
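To make that concrete, a minimal sketch of the direct approach (the DirectSender class is invented for illustration). DatagramSocket.send is internally synchronized, so each connection thread can call it on a shared socket without an extra queue thread:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;

// Illustrative sketch: every "connection" thread sends directly on a
// shared DatagramSocket; the OS send buffer does the queuing.
class DirectSender {
    private final DatagramSocket socket;

    DirectSender(DatagramSocket socket) {
        this.socket = socket;
    }

    void send(byte[] data, InetSocketAddress target) throws IOException {
        socket.send(new DatagramPacket(data, data.length, target));
    }
}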
I am learning sockets and the server/client model and am having a hard time understanding the concept. If a client sends a request, can the server send more than one response? Or do we have to put everything in one response?
For a memory game program: when a client clicks a card, the action sends a request to the server in order to turn the card in every player's program. If the second card does not match, the server tells the players to wait 2 seconds, turns the 2 cards back, and then assigns the turn to the next player. Can a server do this in multiple responses, or does it have to do it in a single response? Since no client requests some of those responses, I don't know whether it is achievable.
If you're talking about TCP connections, then once the connection has been established the client and server are equivalent: both are free to send data, as much and for as long as they like, and/or to shut down their end of the connection.
Edit: After several passes I think I have understood what the second paragraph of your question is aiming for.
There is, of course, nothing that would stop the server from sending unsolicited data. What your server seems to do most of the time is block in an InputStream.read() call. If you want the server to operate even when no network input happens, one solution is to use a read timeout, or to check the input stream for readability before actually reading, as in the sketch below.
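A minimal sketch of the read-timeout variant (the 2-second value and the card-game comment are just placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

class TimedReader {
    // Bound each blocking read so the server regains control
    // periodically, even when the client sends nothing.
    static void serveClient(Socket socket) throws IOException {
        socket.setSoTimeout(2000);      // read() now blocks at most 2 seconds
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[1024];
        while (true) {
            try {
                int n = in.read(buf);
                if (n == -1) break;     // client closed the connection
                // ... handle n bytes of request data ...
            } catch (SocketTimeoutException e) {
                // No input for 2 seconds: do timed work here, e.g. tell
                // every player to turn the two cards back.
            }
        }
    }
}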
This is not a complete answer.
For one request, you get one response back.
Please read this information from Wikipedia for the basics:
"Request-response, also known as request-reply, is a message exchange pattern in which a requestor sends a request message to a replier system which receives and processes the request, ultimately returning a message in response. This is a simple, but powerful messaging pattern which allows two applications to have a two-way conversation with one another over a channel. This pattern is especially common in client-server architectures.
For simplicity, this pattern is typically implemented in a purely synchronous fashion, as in web service calls over HTTP, which holds a connection open and waits until the response is delivered or the timeout period expires. However, request-response may also be implemented asynchronously, with a response being returned at some unknown later time. This is often referred to as "sync over async", or "sync/async", and is common in enterprise application integration (EAI) implementations where slow aggregations, time-intensive functions, or human workflow must be performed before a response can be constructed and delivered."
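To make the synchronous form concrete, a minimal write-then-read sketch in Java (the host, port, and protocol strings are invented for illustration):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class RequestResponseDemo {
    public static void main(String[] args) throws Exception {
        // One synchronous exchange: send a single request line, then
        // block until the single reply line arrives (or the read times out).
        try (Socket socket = new Socket("localhost", 9000)) {
            socket.setSoTimeout(5000);    // don't wait forever for the reply
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("TURN_CARD 7");   // the request
            String reply = in.readLine(); // the one response
            System.out.println("Server said: " + reply);
        }
    }
}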
I've been thinking about this all day. I don't really know if the title is the right one, but here it goes; let me explain my situation. I'm working on a project: a server made in Java for clients made in Delphi. Connections are good, multiple clients each with their own thread, I/O working fine. The clients send Strings to the server, which I read with a BufferedReader. Depending on the reserved words the server receives, it performs an action. Before the client sends the string, it inserts information into a SQL Server database so the server can go and check it after getting the order/command via the socket. The server obtains the information from the database, processes it, and sends it to... let's call it "The Dark Side".
At the moment the transaction is done and the info is sent to the dark side, the server inserts the information... cough cough, dark information... into a database table so the client can go and take what it requested. BUT, I need to report that to the client! ("Yo, check the database again bro, what you want is there :3").
The connection, the socket, is made in another class, not the one from which I want to answer the client. So if I don't have the socket, I don't have the OutputStream, which I need to talk back. That class, the one processing and sending information to the dark side, is going to be handling hundreds of transactions at a time.
My issue is here: I can't report to the client that it's done, because I don't have the socket references in that class. I instantiate the client threads like:
new Client(socket).start();
Objects without reference variables. But I do have one option I can take: store the sockets and their IPs in a HashMap at the moment a new connection is made, like this:
sockets.put(newSocket.getInetAddress().getHostAddress(), newSocket);
Then I can get the socket (so I can get the OutputStream and answer) by calling a static method like this:
public static Socket getSocket(String ip) {
    return sockets.get(ip);
}
But I want you to tell me if there is a better way of doing this, better than storing all of those sockets in a HashMap. How can I get hold of those objects without reference variables? Or maybe that is a good way of doing it and I'm just trying to push past the limits.
P.S.: I tried storing the Client objects in the database by serializing them, but sockets can't be serialized.
Thanks.
This is a design issue for you. You will need to keep track of them somewhere; one solution is to create a singleton class, say SocketMapManager, that holds the HashMap, so that you can access it statically from other classes. http://www.javaworld.com/javaworld/jw-04-2003/jw-0425-designpatterns.html
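A hedged sketch of what such a SocketMapManager could look like (a ConcurrentHashMap lets multiple client threads register and look up safely; all names here are illustrative):

import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the suggested singleton: one process-wide registry of live
// sockets, keyed by client IP, safe for concurrent access.
public final class SocketMapManager {
    private static final Map<String, Socket> SOCKETS = new ConcurrentHashMap<>();

    private SocketMapManager() {}    // no instances; purely static access

    public static void register(Socket socket) {
        SOCKETS.put(socket.getInetAddress().getHostAddress(), socket);
    }

    public static Socket get(String ip) {
        return SOCKETS.get(ip);
    }

    public static void unregister(Socket socket) {
        SOCKETS.remove(socket.getInetAddress().getHostAddress(), socket);
    }
}

Note that keying by IP address breaks as soon as two clients share one address (for example behind NAT); a per-client id assigned at handshake is a safer key.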
Any solution that tells you to keep a reference to the socket/connection/stream is bad, as it means your connections are held up while the server does its work.
You have a couple of options open:
1. Have the clients act as servers too: when they connect, they give the server their IP, port, and some secret string as part of the handshake. This requires control over the client code.
2. Have the server expose a protocol to either take new jobs or check the status of old jobs, with the client polling the server periodically. This means the server gives the client a job id (see the sketch after this answer).
3. Have the clients connect to the database, or to another application (a web service, or a plain socket application like the original one) that connects to the database, to get the status of the job.
Keep in mind that every open socket holds an OS resource open; you can read up on "Network Programming: to maintain sockets or not?"
It all depends on:
1. how many clients connect at a time / in any 5 minutes;
2. how many seconds or minutes one client's request takes to process.
If the number of clients in any 5-minute window is at most 300 (over the next 3 years) and each request takes at most 50 seconds to process, then a dedicated server with a maximum of 50,000 sockets should suffice. Otherwise you need async handling or more servers (and DNS / a web server / port forwarding or some other method for load balancing).
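A rough sketch of the poll-by-job-id idea from option 2. The "STATUS"/"DONE" wire format, the class name, and the 2-second interval are all invented for illustration:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

class JobPoller {
    // Open a short-lived connection, ask for the job's status, and
    // repeat until the server reports it done. No socket is held open
    // between polls, so the server isn't tied up while it works.
    static String pollUntilDone(String host, int port, String jobId)
            throws Exception {
        while (true) {
            try (Socket s = new Socket(host, port)) {
                PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream()));
                out.println("STATUS " + jobId);
                String reply = in.readLine();  // e.g. "PENDING" or "DONE <result>"
                if (reply != null && reply.startsWith("DONE")) {
                    return reply;
                }
            }
            Thread.sleep(2000);                // poll every 2 seconds
        }
    }
}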
I'm having a bit of a problem trying to understand the flow of operations and what exactly you have at your disposal. Is this sequence correct?
1. client writes to database (delphi)
2. client writes to server (delphi)
3. server writes to database (java)
4. server writes to client (java)
5. client reads database (delphi)
And the problem is step 4?
More important: you are saying that there isn't a socket in the Client class, and that you don't have a list of Clients either?
Are you able to use reflection to search for/obtain a socket reference from Client?
If you say you don't have the socket, how is it that you can add that socket to a HashMap?
Last but not least: why do you need to store the socket at all? Maybe every client opens one connection which is then used for multiple requests?
It would be nice if all the answers could be conveyed over just one ip:port...
I sometimes receive packets that have already been received (I used a sniffer and the system ACKs them). Right now I read all the data (until the socket times out) and then send a new request, but this is ugly. I was thinking about using sequence numbers, but I didn't find them in the Socket interface. Any clues?
No, you don't. If the receiving TCP stack misses a packet, it will re-request it, but it can't have delivered the original one to you, because it missed it. And if it gets a packet it has already received, it will drop it.
TCP will deliver all the bytes that are sent, in the order they are sent. Nothing else (well, except some edge cases around disconnects).
Something else is going on.
EDIT:
To be clear, I'm talking about the bytes that are delivered to your application through the socket's InputStream. What happens on the wire is largely irrelevant unless you have some horrific network retransmission problem that you're trying to investigate. And if the receiving stack does get a duplicate packet, it will ACK it, because if it didn't then the sender would re-send it... again.
It sounds like you're trying to account for things that TCP already takes care of. It has sequence numbers built in and will take care of any lost data for you, and on the receiving side you should wait until you have received all your expected data rather than reissuing a request. If you don't want to wait for one response to complete before issuing a new request, consider pipelining requests over multiple connections.
I'm making my own custom server software for a game in Java (the game and the original server software were written in Java). There isn't any protocol documentation available, so I am having to read the packets with Wireshark.
While a client is connecting, the server sends it the level file in gzip format. About 94 packets into sending the level, my server crashes the client with an ArrayIndexOutOfBoundsException. According to the capture file from the original server, it sends a TCP Window Update at about that point. What is a TCP Window Update, and how would I send one using a SocketChannel?
TCP windows are used for flow control between the peers on a connection. With each ACK packet, a host will send a "window size" field. This field says how many bytes of data that host can receive before it's full. The sender is not supposed to send more than that amount of data.
The window might fill up if the client isn't receiving data fast enough. In other words, the TCP buffers can fill up while the application is off doing something other than reading from its socket. When that happens, the client sends an ACK packet advertising a window size of zero. At that point, the server is supposed to stop sending data. Any packets sent to a machine with a full window will not be acknowledged. (This will cause a badly behaved sender to retransmit. A well-behaved sender will just buffer the outgoing data. If the buffer on the sending side fills up too, then the sending app will block when it tries to write more data to the socket!)
This is a TCP stall. It can happen for a lot of reasons, but ultimately it just means the sender is transmitting faster than the receiver is reading.
Once the app on the receiving end gets back around to reading from the socket, it will drain some of the buffered data, which frees up some space. The receiver will then send a "window update" packet to tell the sender how much data it can transmit. The sender starts transmitting its buffered data and traffic should flow normally.
Of course, you can get repeated stalls if the receiver is consistently slow.
I've worded this as if the sender and receiver are different, but in reality, both peers are exchanging window updates with every ACK packet, and either side can have its window fill up.
The overall message is that you don't need to send window update packets directly. It would actually be a bad idea to spoof one up.
Regarding the exception you're seeing: it's not likely to be either caused or prevented by the window update packet. However, if the client is not reading fast enough, you might be losing data. In your server, you should check the return value of your SocketChannel.write() calls. It can be less than the number of bytes you're trying to write. This happens when the sender's transmit buffer fills up, which can happen during a TCP stall, and any bytes you don't re-offer to the channel are simply lost.
For example, if you're trying to write 8192 bytes with each call to write, but one of the calls returns 5691, then you need to send the remaining 2501 bytes on the next call. Otherwise, the client won't see the remainder of that 8K block and your file will be shorter on the client side than on the server side.
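The standard fix is to loop until the buffer is drained; a minimal sketch (the helper class name is invented):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class ChannelUtil {
    // write() may accept only part of the buffer (always possible in
    // non-blocking mode), so keep offering the rest until it is empty.
    // With a non-blocking channel, production code would register
    // OP_WRITE with a Selector instead of spinning like this.
    static void writeFully(SocketChannel channel, ByteBuffer buf)
            throws IOException {
        while (buf.hasRemaining()) {
            channel.write(buf);
        }
    }
}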
This happens really deep in the TCP/IP stack; in your application (server and client) you don't have to worry about TCP windows. The error must be something else.
TCP WindowUpdate - This indicates that the segment was a pure WindowUpdate segment. A WindowUpdate occurs when the application on the receiving side has consumed already received data from the RX buffer causing the TCP layer to send a WindowUpdate to the other side to indicate that there is now more space available in the buffer. Typically seen after a TCP ZeroWindow condition has occurred. Once the application on the receiver retrieves data from the TCP buffer, thereby freeing up space, the receiver should notify the sender that the TCP ZeroWindow condition no longer exists by sending a TCP WindowUpdate that advertises the current window size.
https://wiki.wireshark.org/TCP_Analyze_Sequence_Numbers
A TCP Window Update has to do with communicating the available buffer size between the sender and the receiver. An ArrayIndexOutOfBoundsException is not likely to be caused by it. Most likely the code is expecting some kind of data that it is not getting (quite possibly well before this point, and only now referencing it). Without seeing the code and the stack trace, it is hard to say anything more.
You can dive into this web site http://www.tcpipguide.com/free/index.htm for lots of information on TCP/IP.
Do you get any details with the exception?
It is not likely related to the TCP Window Update packet (have you seen it repeat in exactly the same place across multiple runs?).
More likely it is related to the processing code that works on the received data; the window update is normally just a trigger, not the cause of your problem.
For example, if you use an NIO Selector, a window update may wake up a writable channel, and that in turn exercises the faulty logic in your code.
Get a stack trace and it will show you the root cause.