HTTP POST in Flash - TCP connection closed by client before response - java

I am encountering an interesting issue wherein a TCP connection for an HTTP 1.1 POST request is being closed immediately after the request is sent (i.e., before the response can be sent by the server).
A few details about the test environment:
Client - Windows XP, Internet Explorer 8, Flash player 12.
Server - Java 7
Prior to the aforementioned behaviour, we have several long-standing TCP connections, each being reused for multiple HTTP requests; we open a long poll and, when this poll completes, open another. We see several hours of well-behaved, reused TCP connections opening polls as the previous poll closes.
Eventually -- sometimes after 12 or more hours of normal behaviour -- a poll on a long-standing connection will send the HTTP POST and immediately send a TCP FIN before the server can write the response.
The client behaviour is to keep a poll open at all times, so at this point we try to open a new poll.
A new TCP connection is then opened by the client sending another HTTP POST, with the same behaviour; the request is sent, followed by a FIN from the client.
This behaviour can continue for several minutes, until the server can finally respond and tell the client to shut down. (The server detects the initial closed connection by encountering an IOException; the next time it can communicate with the client, its response tells the client to close.)
Edit: We are opening connections only through the Flash client, and are not delving into low-level TCP code. While Steffen Ullrich is correct, and the single-sided shutdown is possible and should be dealt with, what is not clear is why a single-sided shutdown occurs at this (seemingly arbitrary) point. We are not calling close from the application to instigate this behaviour.
My questions are:
Under what circumstances would a TCP connection for an HTTP request be terminated prior to the response being received? I understand this is bad behaviour, and an incomplete HTTP transaction, so presumably something lower down is terminating the connection for an unknown reason.
Are there any diagnostics that could be used to help understand the problem? (We are currently monitoring server and client side activity with Wireshark.)
Notes:
In Wireshark, the behaviour we see is:
1. Longstanding TCP connection (#1) serving multiple HTTP requests.
2. HTTP request is made over #1.
3. Server ACKs the request.
4. Client sends FIN to close connection #1. Server responds with FIN,ACK. (The expected traffic would be the server sending the HTTP response.) Around this point the server experiences an IOException.
5. Client opens connection #2 and sends an HTTP request.
6. Behaviour continues as from step 3.

Sending a request immediately followed by a FIN is not a connection close, but a shutdown of the write side, i.e. shutdown(socket, SHUT_WR). The client tells the server this way that it will not send any more data, but it might still receive data. It's not that uncommon.
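To illustrate what such a half-close looks like from the client side, here is a minimal Java sketch (host and request line are placeholders): the client writes its request, shuts down only its write direction, and then still reads the response.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HalfCloseDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            out.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Half-close: a FIN is sent for our direction only. The server sees EOF
            // on its read side but can still write the response back to us.
            socket.shutdownOutput();

            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
            System.out.flush();
        }
    }
}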

Related

Simulate an Http client disconnection before the server reply

The problem:
I am having some strange behaviour from a Jetty server (REST over HTTPS) when some client connections are closed (client-side) before the server has had time to reply. Normally this is well managed and expected by a webserver/application server, but in a specific instance something breaks and the server stops replying.
I am trying to reproduce the issue programmatically and locally, opening a client connection and closing it before the server has had time to reply, but I do not have much experience with a situation like this; normally the clients I write are not expected to die immediately.
I am not interested in the language/application I have to use to replicate my case; it can be a Java program, a netcat command, telnet, dotnetcore... The only limit I have is that it should run on a Kubernetes pod, if possible.
I am trying to use Java to open a socket and then close it immediately, or to create an HTTP client and stop it immediately after a request is sent, but with no luck at the moment.
At the same time I am looking at netcat, but I fear it's too low-level for a REST request.
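One way to reproduce this is a plain socket that writes the request and closes immediately, before reading anything back. A rough Java sketch follows; host, port and path are placeholders, and for HTTPS the same idea works with a socket created via SSLSocketFactory.getDefault().

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Rough sketch: send an HTTP request and close the connection right away,
// before the server has had a chance to write its response.
public class DisconnectBeforeReply {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 8080);
        OutputStream out = socket.getOutputStream();
        out.write(("POST /some/endpoint HTTP/1.1\r\n"
                + "Host: localhost\r\n"
                + "Content-Type: application/json\r\n"
                + "Content-Length: 2\r\n"
                + "\r\n"
                + "{}").getBytes(StandardCharsets.US_ASCII));
        out.flush();
        socket.close();   // full close immediately after the request; nothing is read
    }
}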

How is it possible to send multiple POST requests over a single TCP connection (Java)?

I have a Spring Boot application running on Wildfly. A colleague is using curl to send a couple of files to my server.
When he sends the files, and we look at Wireshark, the two post requests are sent in the same frame.
My server processes them separately, reading the socket, saving the file, then closing the connection. Both requests get processed correctly.
How is this possible?
My guess is that when I close the connection in my code it's not closing the TCP connection; it's just finishing the HTTP request, while the actual socket remains open and the details are hidden from me. Is that correct?
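For reference, HTTP keep-alive is what makes this possible: the container completes each request/response exchange but keeps the underlying socket open for the next one. A minimal client-side sketch (the URL is a placeholder) using java.net.http.HttpClient, which pools and reuses connections by default:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeepAlivePosts {
    public static void main(String[] args) throws Exception {
        // One client instance pools connections; two requests to the same host:port
        // will normally travel over a single keep-alive TCP connection.
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest first = HttpRequest.newBuilder(URI.create("http://localhost:8080/upload"))
                .POST(HttpRequest.BodyPublishers.ofString("file one contents"))
                .build();
        HttpRequest second = HttpRequest.newBuilder(URI.create("http://localhost:8080/upload"))
                .POST(HttpRequest.BodyPublishers.ofString("file two contents"))
                .build();

        HttpResponse<String> r1 = client.send(first, HttpResponse.BodyHandlers.ofString());
        HttpResponse<String> r2 = client.send(second, HttpResponse.BodyHandlers.ofString());
        System.out.println(r1.statusCode() + " / " + r2.statusCode());
    }
}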

Jersey/Servlet Serverside handling of network failures

-- EDIT: --
To rephrase the question.
Does HTTP know anything about the status of the underlying TCP connection?
TCP is a reliable protocol. When the server sends data to the client, it expects an acknowledgment from that client. What happens in HTTP when the underlying server-side TCP connection fails to receive the ACK?
-- ORIGINAL Question: --
I am trying to solve a design issue on our HTTP client/server app.
Here is the situation:
The server runs on Tomcat, and we are somewhat limited to using Jersey or Servlets for the server side implementation.
The client requests data from the server; once read, the data is deleted.
Data must not be deleted if the client has not received it.
There is no confirmation from the client if the data is received or not.
The client impl cannot be changed in any way.
The network connection is unstable and can be interrupted for long periods of time (e.g. 30 sec.) and also often.
The problem: if the client makes a request and shortly afterwards loses its connection to the server, the server will not recognize this and will delete the data and send it to the client over the dead connection.
Ideally, we want to get an IOException when flushing the data stream to the client and handle it accordingly:
try (ServletOutputStream outputStream = httpServletResponse.getOutputStream()) {
    outputStream.write(bytes);
    outputStream.flush();
} catch (Exception e) {
    // TODO: do something ...
}
I simulated this locally by killing the client shortly after sending the request or by setting a very low client read timeout value. In both cases I got a server-side exception (with both Jersey and Servlets).
The last test was sending a request over a network and pulling the network cable in the process.
Unfortunately I did not get the expected result. The server streamed the data back without recognizing the interrupted connection.
So, does anyone have an idea how to force a Server side exception when the connection to the client is broken?
Any other ideas that don't involve using Sockets or confirmation calls from the client?
Thanks in advance!
Instead of deleting the file in real time, you can write a message to a queue in order to delete it later. The delete job would then check a database record in which you note whether the client received the file completely.
I don't think there's a way to know for certain whether the data arrived to the client unless the client sends an acknowledgement message.
The only solution seems to be not actually deleting the data, but keeping it and setting a 'deleted' flag. But since I don't know the particular use case, I'm not sure if this helps...
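A rough sketch of that flag-based idea, with an in-memory map standing in for whatever database is actually used:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the soft-delete idea: instead of removing data as soon as it has been
// streamed, flag it as sent and let a later cleanup job remove it once delivery
// is considered confirmed (or after a retention period).
public class SoftDeleteStore {
    enum State { PENDING, SENT }

    private final Map<String, byte[]> payloads = new ConcurrentHashMap<>();
    private final Map<String, State> states = new ConcurrentHashMap<>();

    void put(String id, byte[] data) {
        payloads.put(id, data);
        states.put(id, State.PENDING);
    }

    byte[] read(String id) {
        states.put(id, State.SENT);   // flag only; do not delete yet
        return payloads.get(id);
    }

    // Called later, e.g. by a scheduled job, once delivery is considered confirmed.
    void purge(String id) {
        payloads.remove(id);
        states.remove(id);
    }
}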
TCP is a two-way protocol.
If you set up an input stream and call InputStream.read(), this should return -1 if the client has disconnected.
More detail here:
Java Sockets: check if client is able to receive message from server
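For illustration, a minimal sketch at the raw-socket level (which the Servlet API does not normally expose): a read() that returns -1 means the peer has closed its side of the connection. Note that the probe consumes a byte if the client did send data, so it only makes sense when no further request data is expected.

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class PeerClosedCheck {
    // Returns true if the peer has already sent its FIN; an abrupt reset
    // surfaces as an IOException instead.
    static boolean peerClosed(Socket socket) throws IOException {
        socket.setSoTimeout(50);                 // don't block for long
        InputStream in = socket.getInputStream();
        try {
            return in.read() == -1;              // -1 => EOF, the peer closed its side
        } catch (SocketTimeoutException e) {
            return false;                        // no data and no FIN within the timeout
        }
    }
}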

Apache HTTP client timeout

I am using Apache HTTP client to contact an external service. The service can take a few hours, if not longer, to generate its response. I've tried a few different things but have ended up with either socket or read timeouts. I've just tried using RequestConfig to set the socket and connection timeouts to 0, which according to the documentation should mean infinite, but the request always returns after exactly 1 hour. Any thoughts?
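For reference, a sketch of roughly what that configuration looks like on HttpClient 4.3+, where 0 is documented as meaning infinite for these timeouts; a cut-off at exactly one hour therefore suggests something outside the client configuration, such as an idle timeout in a proxy, load balancer or firewall.

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NoTimeoutClient {
    public static void main(String[] args) {
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(0)            // no limit on establishing the connection
                .setSocketTimeout(0)             // no limit on waiting for response data
                .setConnectionRequestTimeout(0)  // no limit on leasing from the pool
                .build();

        CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .build();
        // use client.execute(...) as usual
    }
}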
I agree with the general sentiment about not trying to keep HTTP connections alive for so long; however, if your hands are tied, you may find you are hitting timeouts in TCP, and TCP-level keep-alives may save the day.
See this link for help setting TCP keep-alive; you cannot do it in HttpClient itself, it's an OS thing. This will send keep-alive probes (ACKs) regularly so your TCP connection is never completely idle, even if nothing is going on in the HTTP stream.
Apache HttpClient TCP Keep-Alive (socket keep-alive)
Holding TCP connections for a long time even if they are active is hard. YMMV.
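For completeness, a sketch assuming HttpClient 4.3+: the SO_KEEPALIVE flag itself can be enabled on pooled sockets via SocketConfig, while the probe interval and count remain OS-level settings (e.g. tcp_keepalive_time on Linux).

import org.apache.http.config.SocketConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class KeepAliveSockets {
    public static void main(String[] args) {
        // Turn on SO_KEEPALIVE for every socket created by the pool. How often the
        // probes are sent is still governed by the operating system's TCP settings.
        SocketConfig socketConfig = SocketConfig.custom()
                .setSoKeepAlive(true)
                .build();

        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setDefaultSocketConfig(socketConfig);

        CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build();
        // use client.execute(...) as usual
    }
}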
Ideally, any service that takes more than a few minutes (say 2-3 minutes or more) should be handled asynchronously, instead of keeping a connection open for an hour or longer. It is a waste of resources on both the client and the server side.
Alternative approaches could solve this kind of problem:
1. You call the service to trigger processing (i.e. to prepare the response). It may return you some unique request ID.
2. Then, after an hour or so (once the response is ready), the client makes another request, passing the request ID, and the server returns the response.
Another alternative: once the response is ready, the server pushes it to a callback URL or similar, where the client hosts another service specifically for receiving the response prepared in step #1.
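A minimal sketch of the trigger-then-poll variant; the endpoints and the idea that POST /jobs returns the request ID as a plain-text body are hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerThenPoll {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1. Trigger processing; the response body is assumed to be the request ID.
        HttpResponse<String> trigger = client.send(
                HttpRequest.newBuilder(URI.create("http://localhost:8080/jobs"))
                        .POST(HttpRequest.BodyPublishers.noBody())
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        String jobId = trigger.body().trim();

        // 2. Poll for the result instead of holding one connection open for an hour.
        while (true) {
            HttpResponse<String> poll = client.send(
                    HttpRequest.newBuilder(URI.create("http://localhost:8080/jobs/" + jobId))
                            .GET()
                            .build(),
                    HttpResponse.BodyHandlers.ofString());
            if (poll.statusCode() == 200) {       // assume 200 means the result is ready
                System.out.println(poll.body());
                break;
            }
            Thread.sleep(60_000);                 // wait a minute between polls
        }
    }
}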

Why does DefaultHttpClient send data over a half-closed socket?

I'm using DefaultHttpClient with a ThreadSafeClientConnManager on Android (2.3.x) to send HTTP requests to my REST server (embedded Jetty).
After ~200 seconds of idle time, the server closes the TCP connection with a [FIN]. The Android client responds with an [ACK]. This should and does leave the socket in a half-closed state (server is still listening, but can't send data).
I would expect that when the client tries to use that connection again (via HttpClient.execute), DefaultHttpClient would detect the half-closed state, close the socket on the client side (thus sending its [FIN/ACK] to finalize the close), and open a new connection for the request. But there's the rub.
Instead, it sends the new HTTP request over the half-closed socket. Only after sending is the half-closed state detected and the socket closed on the client-side (with the [FIN] sent to the server). Of course, the server can't respond to the request (it had already sent its [FIN]), so the client thinks the request failed and automatically retries via a new socket/connection.
The end result is that server sees and processes two copies of the request.
Any ideas on how to fix this? (My server does the correct thing with the second copy, but I'm annoyed that the payload is transmitted twice.)
Shouldn't DefaultHttpClient detect that the socket was closed when it first tries to write the new HTTP packet, close that socket immediately, and start a new one? I'm baffled as to how a new HTTP request is sent on a socket minutes after the server sent a [FIN].
This is a general limitation of blocking I/O in Java. There is simply no way of finding out whether or not the opposite endpoint has closed the connection other than by attempting to read from the socket. Apache HttpClient works around this problem by employing the so-called stale connection check, which is essentially a very brief read operation. However, the check can be and often is disabled. In fact, it is often advisable to have it disabled due to the extra latency the check introduces. I have no idea how exactly the version of HttpClient shipped with Android behaves in this regard, but you could try explicitly enabling the check by using the appropriate config parameter.
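A sketch of explicitly enabling that check on the legacy (pre-4.3 / Android-bundled) API, assuming the shipped version honours the parameter:

import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.params.HttpConnectionParams;

public class StaleCheckClient {
    public static void main(String[] args) {
        DefaultHttpClient client = new DefaultHttpClient();
        // Probe pooled connections for staleness before reuse. This costs a little
        // latency per request but avoids writing a request into a half-closed socket.
        HttpConnectionParams.setStaleCheckingEnabled(client.getParams(), true);
        // use client.execute(...) as usual
    }
}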
A better solution to this problem might be evicting connections from the connection pool that have been idle for longer than a particular period of time (say, 150 seconds).
http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html#d5e652
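A sketch of that eviction approach with the 4.x pooling manager (the 150-second threshold is just the value suggested above; the Android-bundled manager exposes a similar closeIdleConnections method):

import java.util.concurrent.TimeUnit;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class IdleEviction {
    public static void main(String[] args) {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build();

        // Background thread that closes connections which have sat idle long enough
        // that the server may already have half-closed them.
        Thread evictor = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Thread.sleep(5_000);
                    cm.closeExpiredConnections();
                    cm.closeIdleConnections(150, TimeUnit.SECONDS);
                }
            } catch (InterruptedException ignored) {
                // shut down quietly
            }
        });
        evictor.setDaemon(true);
        evictor.start();
        // use client.execute(...) as usual
    }
}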
