I am using the Apache HTTP client to contact an external service. The service can take a few hours, if not longer, to generate its response. I've tried a few different things but have ended up with either socket or read timeouts. I've just tried using RequestConfig to set the socket and connection timeouts to 0, which according to the documentation should mean infinite, but the request always returns after exactly one hour. Any thoughts?
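Roughly what I have at the moment (a simplified sketch; per the docs a value of 0 should mean an infinite timeout):

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// 0 is documented to mean "no timeout" for both values
RequestConfig config = RequestConfig.custom()
        .setConnectTimeout(0)
        .setSocketTimeout(0)
        .build();

CloseableHttpClient client = HttpClients.custom()
        .setDefaultRequestConfig(config)
        .build();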
I agree with the general sentiment about not trying to keep HTTP connections alive this long; however, if your hands are tied, you may find you are hitting timeouts at the TCP level, and TCP-level keep-alives may save the day.
See the link below for help setting TCP keep-alive. You cannot do it in HttpClient, it's an OS thing; it will send ACKs regularly so your TCP connection is never idle, even if nothing is going on in the HTTP stream.
Apache HttpClient TCP Keep-Alive (socket keep-alive)
Holding TCP connections open for a long time, even if they are active, is hard. YMMV.
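At the Java level you can at least request SO_KEEPALIVE on a socket; how often the probes are sent is an OS setting. A minimal sketch (hypothetical host and port):

import java.io.IOException;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("example.com", 80)) { // hypothetical endpoint
            socket.setKeepAlive(true); // ask the OS to send TCP keep-alive probes when idle
            // The probe interval is configured in the OS, e.g. on Linux:
            //   net.ipv4.tcp_keepalive_time (default 7200 seconds)
        }
    }
}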
Ideally, any service call that takes more than a few minutes (say 2-3 minutes or more) should be handled asynchronously instead of keeping a connection open for an hour or so. It is a waste of resources on both the client and the server side.
Alternate approaches to solving this kind of problem could be the following (a sketch of the poll-by-ID variant appears after the list):
You call the service to trigger processing (to prepare the response). It may return some unique request ID.
Then, after an hour or so (once the response is ready), the client makes another request, passing the request ID, and the server returns the response.
An alternate approach: once the response is ready, the server pushes it back to a callback URL, i.e. the client hosts another service specifically for receiving the response prepared by the server in step 1.
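A minimal sketch of the poll-by-ID variant, assuming hypothetical /start and /result/{id} endpoints that return the request ID and the finished response respectively:

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class AsyncPollingClient {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            // Step 1: trigger processing; the server replies immediately with an ID.
            String id = EntityUtils.toString(
                    client.execute(new HttpPost("http://example.com/start")).getEntity());

            // Step 2: poll with short-lived requests until the response is ready.
            while (true) {
                HttpResponse r = client.execute(new HttpGet("http://example.com/result/" + id));
                if (r.getStatusLine().getStatusCode() == 200) {
                    System.out.println(EntityUtils.toString(r.getEntity()));
                    break;
                }
                EntityUtils.consume(r.getEntity()); // release the connection back to the pool
                Thread.sleep(60_000); // ask again in a minute
            }
        }
    }
}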
Related
I'm exposing a SOAP web service via CXF and Apache Camel.
My client wants to give up waiting for the response 5 seconds after sending the request.
How can I do that technically?
Thanks a lot.
Yes, but you do not need to implement it yourself. A timeout specifies the patience of the caller.
CXF has two timeouts, see the documentation for more information:
connectionTimeout: Specifies the amount of time, in milliseconds, that the consumer will attempt to establish a connection before it times out.
receiveTimeout: Specifies the amount of time, in milliseconds, that the consumer will wait for a response before it times out.
In your case, the CXF client must set a receiveTimeout of 5000 (milliseconds). Perhaps you also want to customize the connectionTimeout. With the receiveTimeout set, the client sends a request to the server, and if the server has not started to send a response within 5 seconds, the client aborts the request.
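A minimal sketch of setting both timeouts programmatically, assuming port is your generated JAX-WS client proxy:

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

Client client = ClientProxy.getClient(port); // "port" is the service proxy (assumption)
HTTPConduit conduit = (HTTPConduit) client.getConduit();

HTTPClientPolicy policy = new HTTPClientPolicy();
policy.setConnectionTimeout(3000); // ms allowed to establish the connection
policy.setReceiveTimeout(5000);    // ms allowed to wait for the response
conduit.setClient(policy);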
I think you will get a SocketTimeoutException if the timeout occurs, but I am not sure about this.
A short explanation of what I have:
I have a server and a client.
The client makes a GET request.
The stream of the GET request is used as a push stream.
The server pushes messages to the client via this stream in a single thread.
The problem is that when I don't send data for 30 seconds, the client seems to close the stream automatically.
I've already raised the timeout from 30 seconds to Long.MAX_VALUE with:
stream.setIdleTimeout(Long.MAX_VALUE);
For now I've implemented a heartbeat workaround that pushes a simple string every 20 seconds so that I elude the timeout; roughly like the sketch below.
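A minimal sketch of that heartbeat, assuming a pushString method that writes a message to the stream (hypothetical name):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService heartbeat = Executors.newSingleThreadScheduledExecutor();
// push a tiny message every 20 seconds so the 30-second idle timeout never fires
heartbeat.scheduleAtFixedRate(() -> pushString("heartbeat"), 20, 20, TimeUnit.SECONDS);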
I just want to know if this is the only way to do it, or if there are some settings I haven't found that I should change.
Thank you for every answer.
Regards!
It seems you are doing reverse HTTP long-polling, which does require a heartbeat to avoid streams or connections being closed by an idle timeout.
It is normally better to do regular HTTP long polling (i.e. the client sends the heart-beat), because it allows the server to detect disconnected clients much quicker.
However, you are better off using solutions like CometD if you want to perform server-push messaging.
I am encountering an interesting issue wherein a TCP connection for an HTTP 1.1 POST request is being closed immediately following the request (i.e., before the response can be sent by the server).
A few details about the test environment:
Client - Windows XP, Internet Explorer 8, Flash player 12.
Server - Java 7
Prior to the aforementioned behaviour, we have several longstanding TCP connections, each being reused for multiple HTTP requests; we open a long poll and when this poll completes, open another. We see several hours of well behaved and reused TCP connections opening polls as the previous poll closes.
Eventually -- sometimes after 12 or more hours of normal behaviour -- a poll on a long standing connection will send the HTTP POST and immediately send a TCP FIN before the server can write the response.
The client behaviour is to keep a poll open at all times, so at this point we try to open a new poll.
A new TCP connection is then opened by the client sending another HTTP POST, with the same behaviour; the request is sent, followed by a FIN from the client.
This behaviour can continue for several minutes, until the server can finally respond to kill the client. (The server detects the initial closed connection by encountering an IO exception; the next time it can communicate with the client, the response tells the client to close.)
Edit: We are opening connections only through the Flash client, and are not delving into low-level TCP code. While Steffen Ullrich is correct, and the single-sided shutdown is possible and should be dealt with, what is not clear is why a single-sided shutdown is occurring at this (seemingly arbitrary) point. We are not calling close from the application to instigate this behaviour.
My questions are:
Under what circumstances would a TCP connection for an HTTP request be terminated prior to the response being received? I understand this is bad behaviour and an incomplete HTTP transaction, so presumably something lower down is terminating the connection for an unknown reason.
Are there any diagnostics that could be used to help understand the problem? (We are currently monitoring server and client side activity with Wireshark.)
Notes:
In Wireshark, the behaviour we see is:
Longstanding TCP connection (#1) serving multiple HTTP requests.
HTTP request is made over #1.
Server ACKs the request.
Client sends FIN to close connection #1. Server responds with FIN,ACK. (The expected traffic would be the server sending the HTTP response). Around this point the server experiences an IO Exception.
Client opens connection #2 and sends HTTP request.
Behaviour continues as from step 3.
Sending a request immediately followed by a FIN is not a connection close, but a shutdown of writing: shutdown(socket, SHUT_WR). The client tells the server this way that it will not send any more data, but it might still receive data. It's not that uncommon.
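In Java, the equivalent half-close is Socket.shutdownOutput. A small sketch against a hypothetical server, showing that the response can still be read after the FIN is sent:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

try (Socket socket = new Socket("example.com", 80)) { // hypothetical server
    OutputStream out = socket.getOutputStream();
    out.write("POST / HTTP/1.1\r\nHost: example.com\r\nContent-Length: 0\r\n\r\n".getBytes());
    socket.shutdownOutput(); // sends FIN; the read side stays open
    InputStream in = socket.getInputStream();
    System.out.println(in.read()); // the server's response can still arrive
}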
I'm using DefaultHttpClient with a ThreadSafeClientConnManager on Android (2.3.x) to send HTTP requests to my REST server (embedded Jetty).
After ~200 seconds of idle time, the server closes the TCP connection with a [FIN]. The Android client responds with an [ACK]. This should and does leave the socket in a half-closed state (server is still listening, but can't send data).
I would expect that when the client tries to use that connection again (via HttpClient.execute), DefaultHttpClient would detect the half-closed state, close the socket on the client side (thus sending its [FIN/ACK] to finalize the close), and open a new connection for the request. But there's the rub.
Instead, it sends the new HTTP request over the half-closed socket. Only after sending is the half-closed state detected and the socket closed on the client-side (with the [FIN] sent to the server). Of course, the server can't respond to the request (it had already sent its [FIN]), so the client thinks the request failed and automatically retries via a new socket/connection.
The end result is that the server sees and processes two copies of the request.
Any ideas on how to fix this? (My server does the correct thing with the second copy, but I'm annoyed that the payload is transmitted twice.)
Shouldn't DefaultHttpClient detect that the socket was closed when it first tries to write the new HTTP packet, close that socket immediately, and start a new one? I'm baffled as to how a new HTTP request is sent on a socket minutes after the server sent a [FIN].
This is a general limitation of blocking I/O in Java. There is simply no way of finding out whether or not the opposite endpoint has closed the connection other than by attempting to read from the socket. Apache HttpClient works around this problem by employing the so-called stale connection check, which is essentially a very brief read operation. However, the check can be, and often is, disabled. In fact, it is often advisable to have it disabled due to the extra latency it introduces. I have no idea how exactly the version of HttpClient shipped with Android behaves in this regard, but you could try explicitly enabling the check by using the appropriate config parameter.
A better solution to this problem might be evicting connections from the connection pool that have been idle longer than a particular period of time (say, 150 seconds).
http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html#d5e652
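A sketch of both options against the HttpClient 4.1-style API (the eviction call would typically run from a background thread, or just before borrowing a connection):

import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
import org.apache.http.params.HttpConnectionParams;

ThreadSafeClientConnManager cm = new ThreadSafeClientConnManager();
DefaultHttpClient client = new DefaultHttpClient(cm);

// Option 1: enable the stale connection check (a tiny read before each request)
HttpConnectionParams.setStaleCheckingEnabled(client.getParams(), true);

// Option 2: evict connections idle longer than 150 seconds,
// i.e. before the server's ~200-second FIN arrives
cm.closeIdleConnections(150, TimeUnit.SECONDS);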
I currently have a JVM-based network client that does an HTTP long poll (aka Comet) request using the standard java.net.HttpURLConnection. I have the timeout set very high for the connection (1 hour). For most users it works fine, but some users do not receive the data sent from the server and eventually time out after 1 hour.
My theory is that a (NAT) router is timing out and discarding their connections because they are idle too long before the server sends any data.
My questions then are:
Can I enable TCP keep-alive for the connections used by java.net.HttpURLConnection? I could not find a way to do this.
Is there a different API (than HttpURLConnection) I should be using instead?
Other solutions?
java.net.HttpURLConnection handles the Keep-Alive header transparently; it can be controlled, and it is on by default. But your problem is not in Keep-Alive, which is a higher-level flag indicating that the server should not close the connection after handling the first request, but rather wait for the next one.
In your case, probably something at a lower level of the OSI stack interrupts the connection. Because keeping an open but idle TCP connection for such a long period of time is never a good choice (the FTP protocol, with its two open connections, one for commands and one for data, has the same problem), I would rather implement some sort of disconnect/retry fail-safe procedure on the client side.
In fact, a safe limit would probably be just a few minutes, not hours. Simply disconnect from the HTTP server proactively every 60 seconds or 5 minutes; that should do the trick.
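A sketch of such a reconnect loop with HttpURLConnection (the poll URL and handleData are hypothetical; the short read timeout forces a fresh connection before a NAT can silently drop the idle one):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

while (true) {
    HttpURLConnection conn =
            (HttpURLConnection) new URL("http://example.com/poll").openConnection();
    conn.setReadTimeout(60_000); // give up after 60 seconds instead of 1 hour
    try (InputStream in = conn.getInputStream()) {
        handleData(in); // hypothetical handler for the long-poll payload
        break;          // data arrived; stop polling
    } catch (SocketTimeoutException idle) {
        // nothing arrived within 60 seconds: reconnect and poll again
    } finally {
        conn.disconnect();
    }
}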
There does not appear to be a way to turn on TCP keep-alive for HttpURLConnection.
Apache HttpComponents will be an option when version 4.2 comes out with TCP keep-alive support.