I have a problem with a J2ME client app that sends data to a J2SE server and immediately closes the sending socket. On the J2ME side, I use an ordinary OutputStream on a SocketConnection and repeatedly call write with small packets of data (~30 bytes). Afterwards, I flush and finally close the stream and the connection.
When running the client in the emulator, everything works fine. But with the real device I get some problems...
What I noticed is that the connection is not closed correctly, no matter what I do on the client. I always get a Connection reset exception on the server, which according to TCP indicates an error in the connection or sender, meaning that all subsequent data is to be discarded and the connection no longer used. (With the emulator, the read on the server eventually returns -1, indicating that the connection was closed correctly, with no exception at all...)
I tried playing with the total packet size (1024, 2048, ...) and with the client-side socket options (Delay, Linger, Keep alive). I also tried a Thread.sleep between flush and close... On the server side, different things happen:
Only the first packet of around 30 bytes is received, then an exception is thrown (without delay)
Part of the data is received (~1500 bytes), then no more data is read and no exception is thrown; the read blocks forever (with delay and a total size around 2048)
All data is received correctly, then an exception is thrown (with delay and a total size around 1024)
In all cases, the client reports that it has successfully sent all of the data.
What is the best way to ensure that all data will be received by the other side? As I said, the J2ME client states that all data was written successfully! (The total size can't be fixed to a specific value.)
Thanks in advance!
That looks like a bug in your device. You can send a message from the J2SE side to acknowledge reception, and have the J2ME side wait for that message before closing the socket.
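Something along these lines on the J2ME side, for example (the host, port, and one-byte ACK are assumptions, not part of the original protocol):

    import java.io.InputStream;
    import java.io.OutputStream;

    import javax.microedition.io.Connector;
    import javax.microedition.io.SocketConnection;

    // Minimal sketch of the workaround: block on a one-byte acknowledgement from
    // the J2SE server before closing, so the device cannot tear the connection
    // down while data is still in flight.
    public class AckingSender {
        public void send(byte[] data) throws Exception {
            SocketConnection conn =
                    (SocketConnection) Connector.open("socket://server.example.com:5000");
            OutputStream out = conn.openOutputStream();
            InputStream in = conn.openInputStream();
            try {
                out.write(data);
                out.flush();
                in.read();      // wait for the server's 1-byte ACK before closing
            } finally {
                out.close();
                in.close();
                conn.close();
            }
        }
    }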
Imagine I have a server that can produce messages at a rate of 10,000 messages per second. But my client can only receive up to a maximum of 1000 messages per second.
System 1
My system sends 1000 messages in the first millisecond and then does nothing for the remaining 999 ms.
System 2
My system sends 1 message per millisecond, so in 1000 ms (1 second) it will send 1000 messages.
Q1) Which system is better given that the client can handle a maximum of 500 messages per second?
Q2) What will be the impact of system 1 on the client? Will it overwhelm the client?
Thanks
Will it overwhelm the client? It depends on the size of your messages and on the socket buffer size. The messages the sender writes are buffered. If the client cannot consume them because the buffer is full, the OutputStream the sender is using will block. Once the client has consumed some messages, the sender can continue writing as its OutputStream unblocks.
A typical buffer size on a Windows system used to be 8192 bytes, but the size differs per OS and its settings.
So System 1 will not overwhelm the client; it will simply block at some point.
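As a rough illustration (hypothetical host, port, and 100-byte messages): a sender that bursts out its messages simply stalls inside write() once the receiver's buffer and its own send buffer are full.

    import java.io.OutputStream;
    import java.net.Socket;

    // Hypothetical burst sender: once the client's receive buffer and the local
    // send buffer are full, write() blocks until the client reads, so the client
    // is never flooded.
    public class BurstSender {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 9000);
                 OutputStream out = socket.getOutputStream()) {
                byte[] message = new byte[100];
                for (int i = 0; i < 1000; i++) {
                    out.write(message);   // blocks here once the buffers are full
                }
                out.flush();
            }
        }
    }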
Which approach is best depends entirely on the design of your application.
For example: I had a similar issue while writing to an Arduino via USB (not a client socket, but otherwise the same problem). In my case, buffered messages were a problem, because they were positions from a face-tracking camera. Buffered positions were no longer relevant by the time the Arduino read them, but it still had to process them, because such a buffer is a queue and you can only get the most recent entry after reading out the old ones. The Arduino could never keep up with the messages being produced, because by the time a new position reached the Arduino code, it was already outdated. So that was an "overwhelm".
I resolved this by using bi-directional communication. The Arduino would send a message to the producer saying READY (to receive a message). The producer would then send one (up-to-date) face-tracking position. The Arduino repositioned the camera and requested a new message. This way there was a kind of flow control that prevented the producer from overwhelming the Arduino.
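A minimal sketch of the producer side of that handshake might look like this (host, port, the line-based message format, and latestPosition() are all made up for illustration):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Producer side of the READY handshake: send exactly one up-to-date position
    // each time the Arduino says it is ready for more.
    public class PositionProducer {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("arduino-bridge.local", 4000);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    if ("READY".equals(line)) {
                        out.println(latestPosition()); // send only the most recent position
                    }
                }
            }
        }

        private static String latestPosition() {
            return "0,0"; // placeholder for the current face-tracking position
        }
    }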
Neither is better. TCP will alter the actual flow regardless of what you do yourself.
Neither will overwhelm the client. If the client isn't keeping up, its socket receive buffer will fill up, and so will your socket send buffer, and eventually you will block in send, or get EAGAIN/EWOULDBLOCK if you're in non-blocking mode.
I have a Red5 client implementation which publishes streams, loaded from a video file, to our Wowza media server. The problem is that if the stream name is too long - roughly more than 90 characters - the client does not publish it and fails silently. All other actions expected from the client are performed: it connects to the server and creates a stream. But it never publishes the stream. I don't see a corresponding RTMP message, and I don't see any resulting reaction in the Wowza logs.
I tried to debug the client and tracked the execution until it starts writing to the SocketChannel. Everything is the same for the stream with the shorter name (which publishes fine) and for the stream with the long name, for which the RTMP publish command is never sent.
My questions are:
What is going on?
If I have written some bytes to a SocketChannel without any exceptions thrown - does it guarantee that the corresponding message was sent?
If I have written some bytes to a SocketChannel without any exceptions thrown - can I check, by means of my OS (macOS in my case), whether the bytes were really written anywhere? I already know, by means of Wireshark, that this piece of data was never sent.
UPDATE
...and what is even stranger - after sending "the big" packet, sending a smaller one doesn't help. No packets can be sent after a larger packet has been submitted to the socket.
If I have written some bytes to a SocketChannel without any exceptions thrown - does it guarantee that the corresponding message was sent?
It guarantees that the data has been buffered locally in the socket send buffer, up to the count returned by write(). Nothing more.
As you can't send further data, it sounds to me as though the receiver isn't reading the large piece of data. Is it possibly failing with an exception and ceasing to read altogether?
I am wondering if there is a way to avoid having a TCP RST flag set as opposed to a TCP FIN flag when closing a connection in Netty where there is input data remaining in the TCP receive buffer.
The use case is:
Client (written in C) sends data packets containing many fields.
Server reads packets, encounters an error on an early field, throws an exception.
Exception handler catches the exception, writes an error message, and adds the close on write callback to the write future.
The problem is:
Remaining data in the receive buffer causes Linux (or Java) to flag the TCP packets with the RST flag. This prevents the client from reading the data, since by the time it gets around to trying, it finds a read error because the socket has been closed.
With a straight Java socket, I believe the solution would be to call socket.shutdownOutput() before closing. Is there an equivalent function in Netty or way around this?
If I simply continue reading from the socket, it may not be enough to avoid the RST since there may or may not be data in the buffer exactly when close is called.
For reference: http://cs.baylor.edu/~donahoo/practical/CSockets/TCPRST.pdf
UPDATE:
Another reference and description of the problem: http://docs.oracle.com/javase/1.5.0/docs/guide/net/articles/connection_release.html
Calling shutdownOutput() should help with a more orderly closing of the connection (by sending a FIN), but if the client is still sending data then RST messages will be sent regardless (see the answer from EJP). A shutdownOutput() equivalent may be available in Netty 4+.
Solutions are either to read all data from the client (but you can never be sure when the client will fully stop sending, especially in the case of a malicious client), or to simply wait before closing the connection after sending the response (see answer from irreputable).
If you can get hold of the underlying SocketChannel from Netty, which I am no expert on, you can call channel.socket().shutdownOutput().
Remaining data in the receive buffer causes Linux (or Java) to flag the TCP packets with the RST flag. This prevents the client from reading the data, since by the time it gets around to trying, it finds a read error because the socket has been closed.
I don't understand this. TCP guarantees that the client will receive all the data in its socket receive buffer before it gets the FIN. If you are talking about the server's socket receive buffer, it will be thrown away by the close(), and further attempts by the client to send will get an RST, which becomes an 'IOException: connection reset', because there is no longer a connection to associate the data with and therefore nowhere to put it. NB: it is TCP that does all this, not Java.
But it seems to me you should read the whole request before closing the channel if it's bad.
You could also try increasing the socket receive buffer so it is big enough to hold an entire request. That ensures that the client won't still be sending when you want to close the connection. EDIT: I see the request is megabytes so this won't work.
Can you try this: after server writes the error message, wait for 500ms, then close(). See if the client can receive the error message now.
I'm guessing that the packets in the server's receive buffer have not been ACKed, due to TCP delayed acknowledgement. If close() is called now, the proper response to these packets is an RST. But if shutdownOutput() is invoked, it's a graceful close process; the packets are ACKed first.
EDIT: another attempt after learning more about the matter:
The application protocol is that the server can respond at any time, even while the client request is still being streamed. Therefore the client should, assuming blocking mode, have a separate thread reading from the server. As soon as the client reads a response from the server, it needs to barge into the writing thread to stop further writes to the server. This can be done by simply calling close() on the socket.
On the server side, if the response is written before all of the request data has been read, and close() is called afterwards, an RST will most likely be sent to the client. Apparently most TCP stacks send an RST to the other end if close() is called while the receive buffer isn't empty. Even if the TCP stack doesn't do that, more data will very likely arrive immediately after close(), triggering an RST anyway.
When that happens, the client will very likely fail to read the server response, hence the problem.
So the server can't close() immediately after the response; it needs to wait until the client has received the response. How does the server know that?
First, how does the client know that it has received the full response? That is, how is the response terminated? If the response is terminated by a TCP FIN, the server must send the FIN after the response by calling shutdownOutput(). If the response is self-terminated, e.g. by an HTTP Content-Length header, the server does not need to call shutdownOutput().
After the client receives the full response, per the protocol, it should promptly stop sending more data to the server. This is done by crudely severing the connection; the protocol doesn't provide a more elegant way. Either FIN or RST is fine.
So the server, after writing the response, should keep reading from the client until EOF or an error. Then it can close() the socket.
However, there should be a timeout for this step, to account for malicious/broken clients and network problems. Several seconds should be sufficient to complete the step in most cases.
Also, the server may not want to read from the client, since it isn't free. The server can simply wait past the timeout, then close().
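Put together, a plain java.net sketch of that sequence could look like the following (the method name and the 5-second timeout are assumptions, and error handling is trimmed):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    // Write the response, half-close with a FIN via shutdownOutput(), drain
    // whatever the client is still sending until EOF or timeout, then close.
    public class OrderlyClose {
        public static void respondAndClose(Socket socket, byte[] response) throws IOException {
            socket.getOutputStream().write(response);
            socket.getOutputStream().flush();
            socket.shutdownOutput();            // sends FIN, but we can still read
            socket.setSoTimeout(5000);          // don't drain a broken/malicious client forever
            InputStream in = socket.getInputStream();
            byte[] scratch = new byte[8192];
            try {
                while (in.read(scratch) != -1) {
                    // discard remaining request data until the client closes its side
                }
            } catch (IOException ignored) {
                // timeout or reset while draining; we are closing anyway
            }
            socket.close();
        }
    }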
I have an AS3 ServerSocket; we are connecting to this ServerSocket via a Java socket (an Android app). The problem is that when the ServerSocket application is force-closed, I can't find a way to detect that the remote socket is closed or unreachable, except that trying to flush() a message to that socket throws a broken pipe error. The reason I want to solve this problem another way is to avoid keeping the server application busy with connection-check messages.
A TCP connection is not a continuous flow like a river. It's a sequence of segments sent at discrete times - thus is more like cars on a road. So there is no way to be notified if the road is broken, until a car tries to reach the other end, fails, and calls you back.
You should simply keep sending cars at reasonable intervals (every 30 seconds?), instructing your server to do nothing when it receives a NOOP message (short for No Operation). The client will be programmed to start sending NOOPs when the connection is idle (no message sent or received for 30 seconds), and when the driver calls back to say he can't reach his destination, you close the current socket and attempt to create another one.
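A rough client-side sketch of that idea (host, port, interval, and the NOOP wire format are assumptions):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;

    // This connection carries no other traffic in the sketch, so a NOOP every
    // 30 seconds keeps probing the road; an IOException on the write is the
    // "driver calling back", so drop the socket and reconnect.
    public class HeartbeatClient {
        public static void main(String[] args) throws Exception {
            while (true) {
                try (Socket socket = new Socket("server.example.com", 7777)) {
                    OutputStream out = socket.getOutputStream();
                    while (true) {
                        Thread.sleep(30_000);
                        out.write("NOOP\n".getBytes());
                        out.flush();              // throws if the connection is broken
                    }
                } catch (IOException broken) {
                    // the road is broken: loop around and open a new connection
                }
            }
        }
    }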
There are 3 ways for TCP to detect that a connection has been closed:
One side got an error, so it sends a Reset (RST) to the other side to close the connection
One side actually wants to close the connection, so it sends a FIN message to the other side
A time-out occurs when trying to transmit something from one side to the other. This time-out can only happen when a TCP packet is sent; after a connection has been established, TCP packets are only sent to transmit data (or to close the connection...)
So if one side disconnects suddenly, not in the middle of a data transmission, there is no way of spotting the disconnection other than sending some packet once in a while.
You've found the only way. TCP doesn't provide any way of checking the state of a connection other than trying to use it. You can use this more intelligently, for example you can send yourself heartbeat messages and so forth. But there is no API that will tell you.
What you can do is let the server broadcast an "I am alive" message every 5/10 minutes :)
I'm making my own custom server software for a game in Java (the game and original server software were written with Java). There isn't any protocol documentation available, so I am having to read the packets with Wireshark.
While a client is connecting the server sends it the level file in Gzip format. At about 94 packets into sending the level, my server crashes the client with an ArrayIndexOutOfBoundsException. According to the capture file from the original server, it sends a TCP Window Update at about that point. What is a TCP Window Update, and how would I send one using a SocketChannel?
TCP windows are used for flow control between the peers on a connection. With each ACK packet, a host will send a "window size" field. This field says how many bytes of data that host can receive before it's full. The sender is not supposed to send more than that amount of data.
The window might get full if the client isn't receiving data fast enough. In other words, the TCP buffers can fill up while the application is off doing something other than reading from its socket. When that happens, the client would send an ACK packet with the "window full" bit set. At that point, the server is supposed to stop sending data. Any packets sent to a machine with a full window will not be acknowledged. (This will cause a badly behaved sender to retransmit. A well-behaved sender will just buffer the outgoing data. If the buffer on the sending side fills up too, then the sending app will block when it tries to write more data to the socket!)
This is a TCP stall. It can happen for a lot of reasons, but ultimately it just means the sender is transmitting faster than the receiver is reading.
Once the app on the receiving end gets back around to reading from the socket, it will drain some of the buffered data, which frees up some space. The receiver will then send a "window update" packet to tell the sender how much data it can transmit. The sender starts transmitting its buffered data and traffic should flow normally.
Of course, you can get repeated stalls if the receiver is consistently slow.
I've worded this as if the sender and receiver are different, but in reality, both peers are exchanging window updates with every ACK packet, and either side can have its window fill up.
The overall message is that you don't need to send window update packets directly. It would actually be a bad idea to spoof one up.
Regarding the exception you're seeing... it's not likely to be either caused or prevented by the window update packet. However, if the client is not reading fast enough, you might be losing data. In your server, you should check the return value from your SocketChannel.write() calls. It could be less than the number of bytes you're trying to write. This happens if the sender's transmit buffer gets full, which can happen during a TCP stall. You might be losing bytes.
For example, if you're trying to write 8192 bytes with each call to write, but one of the calls returns 5691, then you need to send the remaining 2501 bytes on the next call. Otherwise, the client won't see the remainder of that 8K block and your file will be shorter on the client side than on the server side.
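A small helper along these lines (not from the original code) avoids dropping the tail of a block after a short write:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    // Keep calling write() until the whole buffer has been accepted, so a short
    // write during a TCP stall doesn't silently drop the tail of an 8 KB block.
    // (For a non-blocking channel, real code would register OP_WRITE with a
    // Selector instead of looping like this.)
    public class FullWriter {
        public static void writeFully(SocketChannel channel, ByteBuffer buffer) throws IOException {
            while (buffer.hasRemaining()) {
                channel.write(buffer); // may accept fewer bytes than remain in the buffer
            }
        }
    }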
This happens really deep in the TCP/IP stack; in your application (server and client) you don't have to worry about TCP windows. The error must be caused by something else.
TCP WindowUpdate - This indicates that the segment was a pure WindowUpdate segment. A WindowUpdate occurs when the application on the receiving side has consumed already received data from the RX buffer causing the TCP layer to send a WindowUpdate to the other side to indicate that there is now more space available in the buffer. Typically seen after a TCP ZeroWindow condition has occurred. Once the application on the receiver retrieves data from the TCP buffer, thereby freeing up space, the receiver should notify the sender that the TCP ZeroWindow condition no longer exists by sending a TCP WindowUpdate that advertises the current window size.
https://wiki.wireshark.org/TCP_Analyze_Sequence_Numbers
A TCP Window Update has to do with communicating the available buffer size between the sender and the receiver; it is not the likely cause of an ArrayIndexOutOfBoundsException. Most likely the code is expecting some kind of data that it is not getting (quite possibly well before the point it is only now referencing). Without seeing the code and the stack trace, it is really hard to say anything more.
You can dive into this web site http://www.tcpipguide.com/free/index.htm for lots of information on TCP/IP.
Do you get any details with the exception?
It is not likely related to the TCP Window Update packet
(have you seen it repeat exactly for multiple instances?)
More likely related to your processing code that works on the received data.
This is normally just a trigger, not the cause of your problem.
For example, if you use NIO selector, a window update may trigger the wake up of a writing channel. That in turn triggers the faulty logic in your code.
Get a stack trace and it will show you the root cause.