I have a Java TLS client that can send a series of requests to a server, each followed by a response from the server.
However, there are many different servers. Some are "multi-message" servers that keep a connection open after the first request, so that subsequent requests can be sent over the first connection. Others are "single-message" servers that close the connection after each message and so a new connection is required for subsequent messages. There is no a priori way for the client to know what type of server it is talking to, nor to fix the servers.
It is very desirable for single-message servers to be able to resume a session without the full handshake.
My original client code just tried to send subsequent requests down the same connection. If that failed it just opened a new connection to the server. It could thus handle both single and multi-message servers.
However, the failure when sending the second message to single-message servers seems to kill the session resumption.
My dirty workaround is to notice if a message fails and then assume the client is talking to a single-message server, in which case the client then explicitly closes the socket after each response has been received. This enables subsequent connections to resume sessions.
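A minimal sketch of that workaround (hypothetical code; the single-read framing and buffer size are placeholders for the real application protocol):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Sketch of the workaround: after the first failure we assume a single-message
// server and close the socket ourselves after every response, which keeps the
// TLS session resumable for the next connection.
class SingleOrMultiClient {
    private final SSLSocketFactory factory =
            (SSLSocketFactory) SSLSocketFactory.getDefault();
    private final String host;
    private final int port;
    private SSLSocket socket;               // current connection, if any
    private boolean singleMessageServer;    // learned lazily on first failure

    SingleOrMultiClient(String host, int port) {
        this.host = host;
        this.port = port;
    }

    synchronized byte[] exchange(byte[] request) throws IOException {
        if (socket == null || socket.isClosed()) {
            // Reconnecting resumes the cached TLS session with an abbreviated
            // handshake, provided the previous session was not invalidated.
            socket = (SSLSocket) factory.createSocket(host, port);
        }
        try {
            byte[] response = writeAndRead(request);
            if (singleMessageServer) {
                closeQuietly();             // our own close, so the session survives
            }
            return response;
        } catch (IOException e) {
            singleMessageServer = true;     // assume single-message from now on
            closeQuietly();
            socket = (SSLSocket) factory.createSocket(host, port);
            byte[] response = writeAndRead(request);
            closeQuietly();
            return response;
        }
    }

    private byte[] writeAndRead(byte[] request) throws IOException {
        OutputStream out = socket.getOutputStream();
        out.write(request);
        out.flush();
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[8192];        // framing is application-specific
        int n = in.read(buf);
        if (n < 0) throw new IOException("server closed the connection");
        return Arrays.copyOf(buf, n);
    }

    private void closeQuietly() {
        try { socket.close(); } catch (IOException ignored) {}
        socket = null;
    }
}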
But there has to be a better way. Testing for isInputShutdown or isConnected does not help, unsurprisingly, as there are timing issues. The connection failure for a single-message server actually happens during the read of the response, after the write of the request, presumably due to buffering.
Any ideas would be much appreciated.
Your initial solution is correct in the case of plaintext: you will get an IOException: connection reset by peer when sending the second message, and you can simply recover by reconnecting.
However, in the TLS case it won't work, as you will not get IOException: connection reset by peer but a SocketException caused by a fatal TLS unexpected_message alert. RFC 2246 #7.2 states:
Alert messages with a level of fatal result in the immediate termination of the connection. In this case, other connections corresponding to the session may continue, but the session identifier must be invalidated, preventing the failed session from being used to establish new connections.
[my emphasis], because the failed session is now deemed insecure, and:
unexpected_message
An inappropriate message was received. This alert is always fatal and should never be observed in communication between proper implementations.
Your second solution seems appropriate to me.
NB:
isInputShutdown() can never be true, as you can't call shutdownInput() on an SSLSocket.
isConnected() will never be false once the socket has been connected.
Both tell you about the state of your socket, not of the connection, so even trying them was futile. And it has nothing to do with 'timing issues'.
Related
I have a server setup using MINA version 2.
I don't have much experience with sockets and tcp.
The problem is if I make a connection to my server, and then unplug my internet and close the connection, (Server doesn't get notification of the connection being closed) the server will forever think that my connection is still active and valid.
The server will continue to send messages to my connection, and doesn't throw any exceptions even though there is nothing on my computer bound to the local port.
How can I test that the connection still exists?
I've tried running MINA logging in debug mode, and logging IoSession.isConnected(), IoSession.isActive(), and IoSession.isClosing().
They always return true, true, false. Also, in debug mode, there was no useful information stating that the connection was lost. It just logged the regular "sent message" stuff, as if there was nothing wrong.
From using Flash ActionScript, I have had experiences where Flash will throw errors saying it's operating on an invalid socket. That leads me to believe the socket on the server is no longer valid for the connection. In other words, if Flash can detect invalid sockets, a Java server should be able to detect them too, correct?
If there is truly no way to detect dead connections, I can always set up a connection keep-alive routine where the client constantly sends an "I'm here" message to the server, and the server closes sessions that haven't had an incoming message for a period of seconds.
EDIT: After learning that "sockets" are private and never shared over the network, I managed to find better results for my issue, and I found this SO thread:
Java socket API: How to tell if a connection has been closed?
Unfortunately, IOException 'Connection reset by peer' doesn't occur when I write to the IoSession in MINA.
Edit:
Is there any way at all in Java to detect when an ACK to a TCP packet was not received after sending a packet? An ACK Timeout?
Edit:
Yet apparently, my computer should send a RST to the server, according to this answer: https://stackoverflow.com/a/1434592/4425643
But that seems like a bad way of port scanning. Is this how port scanning works: port scanners send data to a port and the victim's service responds with a RST? Sorry, I think I need a new question for all this. But it's odd that MINA doesn't throw 'connection reset by peer' when it sends data, so presumably my computer doesn't send a RST.
The concept of socket or connection in Internet protocols is an illusion. It's a convenient abstraction that is provided to you by the operating system and the TCP stack, but in reality, it's all fake.
Under the hood, everything on the Internet takes the form of individual packets.
From the perspective of a computer sending packets to another computer, there is no built-in way to know whether that computer is actually receiving the packets, unless that computer (or some other computer in between, like a router) tells you that the packets were, or were not, received.
From the perspective of a computer expecting to receive packets from another computer, there is no way to know in advance whether any packets are coming, will ever come, or in what order -- until they actually arrive. And once they arrive, just the fact that you received one packet does not mean you'll receive any more in the future.
That's why I say connections or sockets are an illusion. The way that the operating system determines whether a connection is "alive" or not, is simply by waiting an arbitrary amount of time. After that amount of time -- called a timeout -- if one side of the TCP connection doesn't hear back from the other side, it will just assume that the other end has been disconnected, and arbitrarily set the connection status to "closed", "dead" or "terminated" ("timed out").
So:
Your server has no clue that you've pulled the plug on your Internet connection. It has no way of knowing that.
Your server's TCP stack has been configured a certain way to wait an arbitrary amount of time before "giving up" on the other end if no response is received. If this timeout is set to a very large period of time, it may appear to you that your server is hanging on to connections that are no longer valid. If this bothers you, you should look into ways to decrease the timeout interval.
Analogy: If you are on a phone call with someone, and there's a very real risk of them being hurt or killed, and you are talking to them and getting them to answer, and then the phone suddenly goes dead... Well, how long do you wait? At what point do you assume the other person has been hurt or killed? If you wait a couple of milliseconds, in most cases that's too short a "timeout", because the other person could just be listening and thinking of how to respond. If you wait for 50 years, the person might be long dead by then. So you have to set a reasonable timeout value that makes sense.
What you want is a KeepAlive, heartbeat, or ping.
As per @allquicatic's answer, there's no completely reliable built-in method to do this in TCP. You'll have to implement a method that explicitly asks the client "Are you still there?" and awaits an answer for a specified amount of time.
https://en.wikipedia.org/wiki/Keepalive
A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent this link from being broken.
https://en.wikipedia.org/wiki/Heartbeat_(computing)
In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a system. Usually a heartbeat is sent between machines at a regular interval on the order of seconds. If a heartbeat isn't received for a time -- usually a few heartbeat intervals -- the machine that should have sent the heartbeat is assumed to have failed.
The easiest way to implement one is to periodically send an arbitrary piece of data - e.g. a null command. A properly programmed TCP stack will time out if an ACK is not received within its specified timeout period, and then you'll get an IOException 'Connection reset by peer'.
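A minimal sketch of such a heartbeat in Java (the 10-second interval and the zero byte are arbitrary choices that the peer must agree to ignore):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical heartbeat sender: one "null command" byte every 10 seconds.
// Once the peer is unreachable and TCP's retransmissions are exhausted, a
// subsequent write fails with an IOException and we know the link is dead.
class Heartbeat {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void start(Socket socket, Runnable onDead) {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                OutputStream out = socket.getOutputStream();
                out.write(0);   // byte the peer has agreed to ignore
                out.flush();
            } catch (IOException e) {
                scheduler.shutdown();
                onDead.run();   // reconnect / clean up as appropriate
            }
        }, 10, 10, TimeUnit.SECONDS);
    }
}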
You may have to manually tune the TCP parameters, or implement your own functionality if you want more fine-grained control than the default timeout.
The TCP stack is not exposed to Java, and Java does not provide a means to edit the TCP configuration that exists at the OS level.
This means we cannot use TCP keep-alive efficiently from Java, because we can't change its default configuration values. Furthermore, we can't set the timeout for not receiving an ACK for a message sent. (In TCP, every message sent waits for an ACK (acknowledgement) from the peer confirming that the message was successfully delivered.)
Java can only throw exceptions for cases such as a timeout for not completing the TCP handshake within a custom amount of time, a 'Connection reset by peer' exception when a RST is received from the peer, and an exception when the OS eventually gives up retransmitting unacknowledged data, after whatever period of time that may be.
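For illustration, the timeouts Java does let you configure (hostname and values are placeholders):

import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Sketch of the timeouts Java exposes: a connect timeout for the TCP handshake
// and a read timeout (SO_TIMEOUT). Neither gives per-write ACK visibility.
class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        // Fail if the three-way handshake doesn't complete within 5 seconds.
        socket.connect(new InetSocketAddress("example.com", 80), 5_000);
        // read() throws SocketTimeoutException after 10 idle seconds.
        socket.setSoTimeout(10_000);
        try {
            socket.getInputStream().read();
        } catch (SocketTimeoutException e) {
            System.out.println("no data within 10s: " + e.getMessage());
        } finally {
            socket.close();
        }
    }
}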
To dependably track connection status, you must implement your own Ping/Pong, Keep-Alive, or Heartbeat system, as @Dog suggested in his answer. (Either the server must poll the client to see if it's still there, or the client has to continuously let the server know it's still there.)
For example, configure your client to send a small packet every 10 seconds.
In MINA, you can set a session reader idle timeout, which fires an event when a session's reader has been idle for a period of time. You can terminate the connection on delivery of this event. Setting the reader timeout to be a bit longer than the small-packet interval will account for random high latency between the client and server. For example, a reader idle timeout of 15 seconds would be lenient in this case.
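A sketch of that reader-idle setup, assuming MINA 2 (the port and the 15-second window are illustrative; older 2.x releases use session.close(true) instead of closeNow()):

import java.net.InetSocketAddress;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IdleStatus;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

// The client is assumed to send a small packet every 10 seconds, so a session
// whose reader has been idle for 15 seconds is treated as dead.
public class IdleReaperServer extends IoHandlerAdapter {
    @Override
    public void sessionIdle(IoSession session, IdleStatus status) {
        if (status == IdleStatus.READER_IDLE) {
            session.closeNow();   // no heartbeat for 15s: assume the peer is gone
        }
    }

    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();
        acceptor.getSessionConfig().setIdleTime(IdleStatus.READER_IDLE, 15);
        acceptor.setHandler(new IdleReaperServer());
        acceptor.bind(new InetSocketAddress(9000));
    }
}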
If your server will rarely experience session idling, and you think you can save bandwidth by polling the client when the session has gone idle, look into using the Apache MINA Keep Alive Filter.
https://mina.apache.org/mina-project/apidocs/org/apache/mina/filter/keepalive/KeepAliveFilter.html
I am encountering an interesting issue wherein a TCP connection for an HTTP 1.1 POST request is being closed immediately following the request (i.e., before the response can be sent by the server).
A few details about the test environment:
Client - Windows XP, Internet Explorer 8, Flash player 12.
Server - Java 7
Prior to the aforementioned behaviour, we have several longstanding TCP connections, each being reused for multiple HTTP requests; we open a long poll and, when this poll completes, open another. We see several hours of well-behaved and reused TCP connections opening polls as the previous poll closes.
Eventually -- sometimes after 12 or more hours of normal behaviour -- a poll on a longstanding connection will send the HTTP POST and immediately send a TCP FIN, before the server can write the response.
The client behaviour is to keep a poll open at all times, so at this point we try to open a new poll.
A new TCP connection is then opened by the client sending another HTTP POST, with the same behaviour; the request is sent, followed by a FIN from the client.
This behaviour can continue for several minutes, until the server can finally respond to kill the client. (The server detects the initially closed connection by encountering an IOException; the next time it can communicate with the client, its response tells the client to close.)
Edit: We are opening connections only through the Flash client, and are not delving into low-level TCP code. While Steffen Ullrich is correct, and the single-sided shutdown is possible and should be dealt with, what is not clear is why a single-sided shutdown is occurring at this (seemingly arbitrary) point. We are not calling close from the application to instigate this behaviour.
My questions are:
Under what circumstances would a TCP connection for an HTTP request be terminated prior to the response being received? I understand this is bad behaviour, and an incomplete HTTP transaction, so presumably something lower down is terminating the connection for an unknown reason.
Are there any diagnostics that could be used to help understand the problem? (We are currently monitoring server and client side activity with Wireshark.)
Notes:
In Wireshark, the behaviour we see is:
1. Longstanding TCP connection (#1) serving multiple HTTP requests.
2. An HTTP request is made over #1.
3. The server ACKs the request.
4. The client sends a FIN to close connection #1. The server responds with FIN,ACK. (The expected traffic would be the server sending the HTTP response.) Around this point the server experiences an IOException.
5. The client opens connection #2 and sends an HTTP request.
6. Behaviour continues as from 3.
Sending a request immediately followed by a FIN is not a connection close, but a shutdown of writing: shutdown(socket, SHUT_WR). The client tells the server this way that it will not send any more data, but it might still receive data. It's not that uncommon.
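In Java, the same half-close is available as Socket.shutdownOutput(); a small demonstration (host and request are placeholders):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Demonstration of a half-close: shutdownOutput() is Java's equivalent of
// shutdown(socket, SHUT_WR). The FIN is sent, yet the response is still readable.
class HalfCloseDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            out.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII));
            socket.shutdownOutput();            // FIN goes out; write side closed
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {  // reading still works after the FIN
                System.out.write(buf, 0, n);
            }
        }
    }
}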
I have an HTTP server which runs with HTTP and HTTPS, written using Java's NIO and SSL libraries. In HTTPS mode it can communicate with or without a client certificate. However, I would like to perform renegotiation: the client connects with HTTPS, browses resources, and then, when it hits a highly secure resource, the server challenges the client for its certificate. I have been having a few problems with this and need to know what the workflow should be. Here is what I have observed with both IE 9 and Chrome.
1) When the client requests the secure resource, I respond to the HTTP request in full. I then challenge the client for their cert upon completion with
engine.setNeedClientAuth(true);
engine.beginHandshake();
The result is a TCP FIN from the client (it closes its side of the connection), and the renegotiation fails.
2) When the client requests the secure resource, I challenge for the cert before responding. In this scenario the exchange occurs and both browsers pop up a request for the cert; however, as soon as the prompt appears, a TCP FIN is sent from the client and the renegotiation terminates. The client then sends another request which eventually carries the certificate; at times I have to challenge twice.
So my question here is, what is supposed to happen? Is the initial browser connection supposed to remain open, or is termination like this normal?
NOTE: Another very interesting observation here is that, in scenario 2, when the browser closes the TCP connection, it then reconnects after you choose the certificate. It does not, however, repost the request; it just sits there and expects the server to respond. In NIO terminology it sits waiting on an OP_READ, which means there is no data in the socket input buffer. Do the browsers expect a response to the original message whose connection they terminated?
It's strange that there is absolutely no documentation or specification for this workflow, yet all the browsers I've tested seem to follow it.
(1) is insecure and therefore pointless to discuss further. You've already leaked the information before you even ask for the credentials.
(2) is the correct way to do this. The client shouldn't be closing the connection if it is configured to allow renegotiation. Due to an SSL security problem a year or so ago, there was temporarily a phase where SSL renegotiation was disallowed by default; you may be running into this. In that case you should issue an HTTP redirect first and close the connection at your end, to force the client to use a new connection, and the new connection should ask for a client certificate. How you arrange that in your code is up to you.
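As an illustration of that arrangement (entirely hypothetical names and redirect target, not a prescribed API), the endpoint the client is redirected to can demand the certificate in its initial handshake, so no renegotiation is ever needed:

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

// Sketch: answer the request for the protected resource with a redirect plus
// Connection: close, and require the client certificate up front on the
// listener the client is redirected to.
class EngineFactory {
    static final String REDIRECT =
            "HTTP/1.1 302 Found\r\n"
          + "Location: https://example.com:8443/secure\r\n"   // illustrative target
          + "Connection: close\r\n\r\n";

    static SSLEngine newServerEngine(SSLContext context, boolean secureEndpoint) {
        SSLEngine engine = context.createSSLEngine();
        engine.setUseClientMode(false);
        if (secureEndpoint) {
            engine.setNeedClientAuth(true); // cert demanded in the initial handshake
        }
        return engine;
    }
}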
How will the server know of client connection loss? Does this trigger an event? Is it possible to store code (server side) so that it executes before the connection loss happens?
This connection loss can happen due to:
the connection being idle for too long,
the client side terminating,
etc.
I am asking this in particular about JSP and PHP.
It depends on the protocol you're talking about, but a "connection" is typically established through a three-way handshake, which causes both parties to simply agree that they're "connected" now. This means both parties remember in a table that there's an open connection to IP a.b.c.d on port x and what context this "connection" is associated with. All incoming data from that "connection" is then passed to the associated context.
That's all there is to it, there's no real "physical" connection; it's just an agreed upon state between two parties.
Depending on the protocol, a connection can be formally terminated with an appropriate packet. One party sends this packet to the other, telling it that the "connection" is terminated; both parties remove the table entries and that's that.
If the connection is interrupted without this packet being sent, neither party will know about it. Only the next time one party tries to send data to the other will this problem become apparent.
Depending on the protocol a connection may automatically be considered stale and terminated if no data was received for a certain amount of time. In this case, a dead connection will be noticed sooner, but requires a constant back and forth of some sort between both parties.
So in short: yes, there is a server event that can be triggered, but it is not guaranteed to be triggered.
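To make the two cases concrete in Java (a hypothetical helper): a formal termination packet (FIN) surfaces as end-of-stream, while a silent interruption produces nothing at all:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

// A peer's FIN surfaces as end-of-stream: read() returns -1 and the server-side
// event can fire. If the link dies without that packet, this read simply blocks
// until a read timeout (setSoTimeout) or an application keep-alive intervenes.
class CloseWatcher {
    static void waitForPeerClose(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();
        while (in.read() != -1) {
            // discard incoming data until the peer terminates the connection
        }
        socket.close();   // FIN seen: drop our "table entry" too
    }
}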
When you close a socket, the socket on the other end is notified. However, if the connection is lost ungracefully (e.g. a network cable is unplugged, or a computer loses power), then you probably will not find out.
To deal with this, you can send periodic messages just to check the connection. If the send fails, then the connection has been interrupted. Make sure you set up your sockets to only wait for a reasonable amount of time, though.
If you are talking about a typical client-server architecture, the server shouldn't have to bother about its connection to the client; only the client should bother about its connection to the server. The client should take measures to avoid the connection being dropped, like periodically sending a keep-alive message or similar to avoid a timeout.
Why does the server need to bother about connection loss/termination?
The server's job is to serve requests that come from the client. That's it. If the client doesn't receive the data it expected from the server, it can take appropriate action. If the connection gets disconnected while the server is doing some processing to give data to the client, the server still can't do much, as the HTTP request is initiated by the client.
So the client can make a new request if for some reason it didn't get a response.
My TCP server is implemented using Netty. My client uses a vanilla java.net.Socket to connect to this server. I'm using the same socket to send multiple requests to the server. Once done with all the requests, the client calls socket.close().
I'm not closing the channel anywhere in my server code. Also, I've set TCP keep-alive (SO_KEEPALIVE) on my server. Will closing the socket on the client end automatically close the channel on the server, or do I have to do something else explicitly, and what is the best practice?
Usually, if an application closes a socket, its remote peer is also notified of the closure. Therefore, you don't need to call close() on both sides. However, sometimes, due to network problems, you might not get notified when the remote peer closes the connection. To work around this problem, it's a good idea to send some message periodically; then you will detect the unexpected closure sooner.
Please note that SO_KEEPALIVE will not help much here, because on most operating systems the default keep-alive time is very long.
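As an illustration, assuming Netty 4 (Netty 3's equivalent callback is channelDisconnected()), a clean close() from the client normally surfaces on the server like this:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// When the client closes its socket cleanly, the server's channel becomes
// inactive and Netty closes it for you; this handler just observes the event.
public class DisconnectLoggingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("peer closed: " + ctx.channel().remoteAddress());
        ctx.fireChannelInactive();   // pass the event along the pipeline
    }
}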