The answer here by Stephen C describes the issue very well: a broken pipe exception is caused by something closing the connection, and that something is not the application itself. I want to know what this "something" could be in general, and what the possible ways of handling it are.
My usage environment:
I am running my application on a set of machines on Azure, and all of them talk to a single machine. I get this error almost every time.
Could a TCP timeout be one of the reasons? If yes, how do I make the SocketChannels (in effect, the Sockets running behind them) never close due to a TCP timeout?
You can get the socket associated with the SocketChannel and then set its keepAlive property. Something like this:
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

// Open the channel, connect, then enable TCP keep-alive on the underlying socket.
SocketChannel sockChannel = SocketChannel.open();
sockChannel.connect(new InetSocketAddress("example.com", 8080)); // placeholder address
sockChannel.socket().setKeepAlive(true);
A broken pipe exception occurs whenever the client goes away from the socket it is listening on. This can happen when the socket timeout is reached on the client side because the server was responding slowly. For example, in the case of a browser: if an HTTP request takes too long to respond and the user closes the browser, a broken pipe exception will show up in the application logs.
To resolve this, either increase the socket timeout on the client or fix your server's response time.
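For the first option, a minimal sketch of raising the client-side timeout (the address and the 30-second value are illustrative placeholders, not recommendations):

import java.net.Socket;

public class ClientTimeoutExample {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; give the slow server more time to answer.
        try (Socket socket = new Socket("example.com", 8080)) {
            socket.setSoTimeout(30_000); // read timeout in milliseconds
            // ... issue the request and read the response here ...
        }
    }
}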
Related
I'm writing a client-server application using Java TCP sockets.
The client and server are connected by a socket.
Sometimes the server has to write a reply message for the client on this socket.
But at that moment, the client's socket could already be closed, not via the close() method, but because the client's application was closed.
Can you tell me how the server can recognize this situation and avoid writing its reply message to this socket?
This is impossible to do reliably. If you establish that a connection is open, by the time you get around to writing to it, it may have been closed. The reliable solution is to attempt the write, and handle any errors that may result.
Note that if you do get an error indication, there is no saying how much data got to the remote peer. If you perform two writes, and the second write gets an error indication, it is quite possible that the remote peer shut down before the first write but the local peer only noticed it during the second write.
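A minimal sketch of that write-and-handle-errors approach (the sendReply method and its error handling are illustrative, not a fixed API):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class ReplySender {
    // Attempt the write; treat an IOException as "the peer is gone".
    static void sendReply(Socket socket, byte[] reply) {
        try {
            OutputStream out = socket.getOutputStream();
            out.write(reply);
            out.flush();
        } catch (IOException e) {
            // Connection reset / broken pipe. Note that some earlier
            // bytes may or may not have reached the peer.
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}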
Could be related: Difference between Connection timed out and Read timed out
I have written a Java server application using NIO.
I connected a client to my server application and unplugged the network cable of the client. On the server side, I didn't get any exception immediately, but after some time (8 minutes or so) I got an "IOException: Connection timed out".
Here is a partial stack trace:
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
at sun.nio.ch.IOUtil.read(IOUtil.java:198)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375)
........
Up to this point, the netstat output showed the socket state of this particular client connection as ESTABLISHED.
Questions are:
Is this timeout configurable?
Why does the netstat output show the socket state as ESTABLISHED? Ideally it should be CLOSE_WAIT (as the client got disconnected)
No, it is not configurable. It is the result of retransmit timeouts. It wouldn't happen at all unless the application kept writing, or had pending writes when the disconnect happened.
It shouldn't be CLOSE_WAIT, as no FIN had been received. Ergo it should be ESTABLISHED.
That timeout is generally not configurable, as it depends on what the operating system offers. Unix in general does not allow a process to set the connection timeout; it is typically fixed at around two minutes. Some Linux/BSD variants may allow it to be configured, but that is not portable, and normally only the administrator (not an ordinary user) may change it. The value follows from the number of retransmissions and the timeout used for each attempt, and is under the exclusive control of the TCP implementation.
When you finish a connection you pass through two states (FIN_WAIT and TIME_WAIT) that are not timeout states. The first exists to wait for the other end's response: you can close your side of the connection, telling the other side you are not going to send more data, but you have to wait for the other end to do the same. TIME_WAIT is a special state the kernel maintains for a closed connection so it can process (and discard) any retransmissions of the last segments that may still be in flight after the connection is closed. Neither has anything to do with timeouts.
A TCP connection has no implicit timeout. Two machines can go weeks without exchanging any data if they have nothing to transmit. You can enable a kind of heartbeat between otherwise silent connections to check their liveness with the SO_KEEPALIVE socket option. This option makes the TCP stacks on both sides exchange empty probe packets to learn whether the other side is still alive. Again, you can only control whether these probes are used, not their frequency or the number of lost probes that closes the connection (on Linux this can be tuned, but only through kernel configuration in administrator mode).
Note 1 (answer to @Krishna Chaitanya P)
If you unplugged the cable and got an exception some time later, one of two things can explain it:
You continued writing to that connection and the send buffer filled up without being acknowledged in time (this is rare, as normally your process gets blocked in the write(2) system call when this happens), and some timeout (in the Java socket implementation) occurred.
Your Java TCP socket implementation uses the SO_KEEPALIVE option (the most probable cause). As I said before, you have boolean control over whether it is used, but you cannot adjust the time between keepalives or the number of missed ones that drops your connection. Try calling the getKeepAlive()/setKeepAlive(boolean) methods on the Socket class to control this feature. I have not seen in the documentation whether a connected socket is keep-alive enabled by default. This is, by far, a commonly used option in servers, as it allows disconnecting clients that lose their connection without notifying the server.
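A small sketch of checking and setting that flag (the host and port are placeholders):

import java.net.Socket;

public class KeepAliveCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder address; replace with your own server.
        try (Socket s = new Socket("example.com", 8080)) {
            System.out.println("Keep-alive on by default: " + s.getKeepAlive());
            s.setKeepAlive(true); // ask the OS to send periodic keep-alive probes
        }
    }
}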
In my experience, the cause of this exception occurring on a connected socket was always a firewall closing connections that had been idle for too long. I've seen it happen in cloud environments (AWS, Rackspace) in particular, but it's not limited to them. Most likely, you have some kind of firewall between the two connection peers which closes idle connections after some time.
The best fix in an ideal world is to change the firewall configuration, provided you or an operations team has access to it. In any case, it's better if you can handle that use case in your code and gracefully terminate the communication with the other peer.
Because the CLOSE_WAIT state is entered only after a FIN has been received from the peer, and no FIN was received here.
This timeout is most probably configurable.
My TCP server is implemented using Netty. My client uses a vanilla java.net.Socket to connect to this server. I'm using the same socket to send multiple requests to the server. Once done with all the requests, the client calls socket.close().
I'm not closing the channel anywhere in my server code. Also, I've set TCP KEEP_ALIVE on my server. Will closing the socket on the client end automatically close the channel on the server, or do I have to do something else explicitly? What is the best practice?
Usually, if an application closes a socket, its remote peer also notices the closure. Therefore, you don't need to call close() on both sides. However, sometimes, due to network problems, you might not get notified when the remote peer closes the connection. To work around this problem, it's a good idea to send some message periodically; you will then detect an unexpected closure sooner.
Please note that SO_KEEPALIVE will not help much here, because on most operating systems the default keep-alive time is very long.
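A simple client-side sketch of such periodic messages (the address, the 30-second interval, and the one-byte ping are all illustrative choices, not part of any protocol):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("example.com", 8080); // placeholder address
        OutputStream out = socket.getOutputStream();
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        // Send a one-byte ping every 30 seconds; a dead connection then
        // surfaces here as an IOException instead of going unnoticed.
        ses.scheduleAtFixedRate(() -> {
            try {
                out.write(0x00);
                out.flush();
            } catch (IOException e) {
                ses.shutdown(); // stop pinging
                try { socket.close(); } catch (IOException ignored) { }
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
}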
I have a typical Java client and server. The client sends a request to the server and waits for the response. The client reads up to, say, 100 bytes of data from the contained input stream into a byte array, and waits for the complete 100-byte response to arrive within a specified timeout period of, say, 3 seconds. The problem is to identify whether the server went down or crashed while (or before) writing the response. Basically, we need to identify whether the socket was broken or the peer disconnected for some reason. Is there a way to identify this?
How to identify a broken socket connection in Java immediately?
You can't detect it immediately, in Java or any other language. TCP/IP doesn't know, so Java can't know. The only sure way to detect a broken TCP connection is by writing to it and catching IOExceptions, and they won't happen immediately.
The best way to identify that the connection is down is to time it out: you expect a response within a given amount of time and flag the connection if that response does not arrive as expected.
When you have a graceful disconnection (e.g. the other end calls close()), a read on the connection will let you know once the buffer has been drained.
However, if there is some other type of failure, you might not be notified until the OS times out the connection (e.g. after 3 minutes), and indeed you may want to keep the connection: if you pull the network cable out for 10 seconds and put it back in, that doesn't need to count as a failure.
EDIT: I don't believe it's a good idea to be too aggressive in automatically handling connection/service "failures". This is usually better handled by a planned fix to the system based on investigation of the true cause, e.g. increased bandwidth, redundant connectivity, faster servers, code fixes.
If the connection is broken abnormally, you will receive an IOException when reading; that normally happens quite fast, but there are no guarantees about timing; it all depends on the OS, the network hardware, etc. If the remote end gracefully closes the socket, you'll read -1 as the next byte.
Assuming everything else works, if the remote peer - the TCP server - was killed then the TCP client will normally receive a TCP RST (reset) and you'll get an IOException in your client application.
However, there are lots of other things that can go wrong besides a process being killed. Basically anything on the network path between the two processes: a cable is yanked, a router dies, a firewall dies, etc. All of this will not immediately be detected.
For the above reasons the general rule is - as pointed out in the answer from EJP - that a broken connection can only be detected by writing to it. This is why it is always recommended that a TCP client and TCP server exchange some type of heartbeat messages at regular intervals. There are different ways to do this. I like best the method where the TCP client will - in the absence of data being received from the TCP server - send a heartbeat message to the server and expect a reply back within a certain time period. This way heartbeat messages will only be sent when really needed.
A sub-optimal approach - if you cannot implement true heartbeating - is to always read with a timeout. Set the timeout on the socket and then catch java.net.SocketTimeoutException. This will allow you to know that no data has been received on socket during x milliseconds.
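A minimal sketch of that read-with-timeout pattern (the address and the 3-second value are illustrative):

import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimedRead {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 8080)) { // placeholder address
            socket.setSoTimeout(3000); // any read now blocks for at most 3 seconds
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[100];
            try {
                int n = in.read(buf);
                if (n == -1) {
                    System.out.println("Peer closed the connection gracefully");
                }
            } catch (SocketTimeoutException e) {
                System.out.println("No data within 3 s; connection may be idle or broken");
            }
        }
    }
}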
It should be mentioned that there's one scenario where you don't have to use heartbeating or a socket timeout: if the TCP client and the TCP server communicate over a loopback interface, then a broken connection will always be propagated to both the TCP client application and the TCP server application, because in this case there's no real network infrastructure between the two processes. So if you have an existing application that isn't well designed with respect to its TCP communication (i.e. it doesn't implement some form of heartbeating, or at least reading with a timeout), then as a last resort you may 'fix' the problem by moving the two applications onto the same host and letting them communicate over the loopback interface.
In Java, if a client connects to a server via a socket and the client hits an exception that causes it to crash, is there any way the server can detect that the socket connection is lost?
I assume one method would be have some sort of heartbeat polling the client, but is there a simpler/easier way?
There are a few ways, depending on what you consider to be a "crash".
If the client process dies, the client OS will close the socket. The server can detect this by performing a read(), which will either return -1 (EOF) or raise a SocketException ("connection reset").
If the client gets into an infinite loop, the connection will remain open; the only way to detect this is by incorporating some kind of "heartbeat" into your protocol.
If the client host is rebooted or the network connection breaks, the server may not notice unless either:
the protocol has a "heartbeat" mechanism as above, with some kind of timeout, or
TCP keepalive is enabled on the socket by calling socket.setKeepAlive(true) - this instructs the OS to periodically* send a packet to check that the remote end of the connection is alive, closing the connection if not
*both Windows and Linux default to 2 hours; you can change this system-wide but not per-socket (under Java, anyway)
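A sketch of the first case, detecting a dead client from the server's read loop (handleClient is a hypothetical per-connection method, not a fixed API):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketException;

public class ClientWatcher {
    // Hypothetical per-connection handler: the read loop is where
    // a dead client shows up.
    static void handleClient(Socket client) {
        try (InputStream in = client.getInputStream()) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                // ... process n bytes of client data ...
            }
            // read() returned -1: the client's OS closed the socket (EOF)
        } catch (SocketException e) {
            // "Connection reset": the client vanished abruptly
        } catch (IOException e) {
            // some other I/O failure
        }
    }
}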
TCP sockets can do this heartbeating for you, and Java exposes it to the developer; see setKeepAlive()/getKeepAlive() in the Socket class. With keep-alive enabled, if no TCP traffic arrives from the client within the keep-alive period, the OS will close the socket, and the server can then remove the session information it holds for that endpoint.