How to understand SocketTimeout for http put? - java

The javadoc of java.net.Socket#setSoTimeout says:
Enable/disable SO_TIMEOUT with the specified timeout, in
milliseconds. With this option set to a non-zero timeout,
a read() call on the InputStream associated with this Socket
will block for only this amount of time. If the timeout expires,
a java.net.SocketTimeoutException is raised, though the
Socket is still valid.
For an HTTP PUT operation, the client may upload a huge file, so the client is constantly writing and never reading data from the server.
In this case, if I set the socket timeout on the HTTP client, will it throw a SocketTimeoutException during the upload?

No, it won't. It may, however, throw a timeout exception when reading the response code. If the peer closes the connection during the upload, it will throw an IOException 'connection reset by peer'.
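For illustration, here is a minimal sketch using HttpURLConnection (the question doesn't name a particular client; the URL and file name below are placeholders). The read timeout, which maps to SO_TIMEOUT on the underlying socket, has no effect while the request body is being written and only comes into play once the response is read:
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

public class PutTimeoutSketch {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("huge.bin");                                // placeholder upload source
        URL url = new URL("http://example.com/upload");                 // placeholder endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setReadTimeout(5_000);                                     // read timeout; irrelevant while writing
        conn.setFixedLengthStreamingMode(Files.size(file));
        try (OutputStream out = conn.getOutputStream();
             InputStream in = Files.newInputStream(file)) {
            in.transferTo(out);                                         // uploading: the read timeout does not apply here
        }
        try {
            int status = conn.getResponseCode();                        // reading the response: the timeout applies here
            System.out.println("Server replied " + status);
        } catch (SocketTimeoutException e) {
            System.out.println("No response within the read timeout");
        }
    }
}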

Related

Java HttpRequest timeout unexpectedly throwing HttpConnectTimeoutException

Using the new java.net.http package released with JDK 11, an HttpRequest has been assembled with a deliberately low response timeout:
HttpRequest.Builder builder = HttpRequest.newBuilder(getEndpointUri());
addRequestHeaders(builder);
builder.POST(HttpRequest.BodyPublishers.ofString(rawXml));
builder.timeout(Duration.ofMillis(1));
HttpRequest httpRequest = builder.build();
The aim is to test that HttpTimeoutException outcomes are handled correctly, but unexpectedly this response timeout value is leading to an HttpConnectTimeoutException, which is being caught by this code:
try {
    HttpResponse<InputStream> httpResponse = completableExchange.join();
} catch (CompletionException ce) {
    if (ce.getCause() instanceof HttpConnectTimeoutException) {
        System.out.println("Connection timeout occurred!");
    } else {
        throw ce;
    }
}
This means that a response timeout is causing the code to act as though a connection timeout has occurred. To the best of my understanding, the connection timeout and response timeout should be separate concepts, which should be possible to catch and handle separately.
The stack trace attached to the HttpConnectTimeoutException looks like this:
java.net.http.HttpConnectTimeoutException: HTTP connect timed out
at java.net.http/jdk.internal.net.http.ResponseTimerEvent.handle(ResponseTimerEvent.java:68)
at java.net.http/jdk.internal.net.http.HttpClientImpl.purgeTimeoutsAndReturnNextDeadline(HttpClientImpl.java:1248)
at java.net.http/jdk.internal.net.http.HttpClientImpl$SelectorManager.run(HttpClientImpl.java:877)
Caused by: java.net.ConnectException: HTTP connect timed out
at java.net.http/jdk.internal.net.http.ResponseTimerEvent.handle(ResponseTimerEvent.java:69)
... 2 more
Am I misunderstanding the timeout concepts? Does the HttpRequest timeout value simply provide an alternative to the default of the HttpClient timeout value? Is there a reliable way to catch connection and response timeout as distinct events?
For what it's worth, the Javadoc for HttpRequest.Builder.timeout(Duration) says the following:
Sets a timeout for this request. If the response is not received within the specified timeout then an HttpTimeoutException is thrown from HttpClient::send or HttpClient::sendAsync completes exceptionally with an HttpTimeoutException. The effect of not setting a timeout is the same as setting an infinite Duration, ie. block forever.
To make things confusing, HttpConnectTimeoutException is a subclass of HttpTimeoutException, so technically the contract of the timeout(Duration) method is being satisfied. But this seems unhelpful.
(Before you ask: yes, the value passed to HttpRequest.Builder.timeout(Duration) is the deciding factor in whether or not an exception is thrown. So the exception is not based on the connection timeout value being used to create the HttpClient instance.)
IIRC you will get an HttpConnectTimeoutException if the connection is not yet connected at the time the request timeout is raised, or if the connect timeout is raised before the connection has finished connecting.
When sending a request, the underlying connection might already be connected or not, depending on whether a suitable existing connection was found in the pool. The request timeout starts immediately, independently of the state of the underlying connection. If the underlying connection is not connected yet, and the request timeout expires before it gets connected, then you will get an HttpConnectTimeoutException, because the connection could not be established within the time allocated for the response to be delivered. You could see it as the request timeout clipping the connect timeout.
Do you have any specific use case in mind for distinguishing the two cases:
HttpConnectTimeoutException is raised because the connection could not be connected within the time specified by the connection timeout,
HttpConnectTimeoutException is raised because the request timeout expired before the connection could be connected?
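For what it's worth, a rough sketch of telling the two timeouts apart with separate catch blocks; the endpoint URI and durations are placeholders, and it relies on HttpConnectTimeoutException being a subclass of HttpTimeoutException as noted above:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpConnectTimeoutException;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class TimeoutSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))                  // time allowed to establish the connection
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")) // placeholder
                .timeout(Duration.ofSeconds(5))                         // time allowed for the response to arrive
                .GET()
                .build();
        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        } catch (HttpConnectTimeoutException e) {
            // The connection could not be established in time: either the connect timeout fired,
            // or the request timeout expired before the connection finished connecting.
            System.out.println("Connect timed out");
        } catch (HttpTimeoutException e) {
            // The connection was established, but the response did not arrive within the request timeout.
            System.out.println("Response timed out");
        }
    }
}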

Stop Socket with timeout from waiting after data read from socket

I am trying to create a Java HTTP server using TCP sockets. HTTP 1.1 has a timeout value that will enable the connection to be persistent and wait for a short while for possible data from the client. I am trying to implement this timer in my program by using clientSocket.setSoTimeout(). Even though this will help to leave the connection open for a certain amount of time, it will wait for that exact amount of time before allowing the next request to be read.
For example:
If timeout is set to 5 seconds,
Request 1 is read. Then the socket hangs and waits until the 5 seconds are over.
Request 2 is read. The socket waits until the 5 seconds are up again.
This proves to be a problem if my timeout is set to big values. This should not be the case, as the request should be processed once it is received and the timeout should only expire if no data is received throughout the specified duration.
Can anyone advise me on how I could resolve this?
Edit:
For people who face a similar problem, here is my solution:
Since the client waits until the timeout before receiving all the data, I guessed that the client does not know that all the data from the server has been received. Hence, I added a content-length field to the HTTP response packet. Now, my client no longer hangs after receiving the data. The setSoTimeout does indeed work as stated!
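For reference, a rough sketch of writing such a response over the accepted socket; the status line, headers and body here are placeholders:
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class ResponseWriter {
    // Writes a simple HTTP/1.1 response with a Content-Length header, so the client
    // knows exactly where the body ends instead of waiting for a timeout.
    static void writeResponse(Socket clientSocket, String body) throws Exception {
        byte[] payload = body.getBytes(StandardCharsets.UTF_8);
        String headers = "HTTP/1.1 200 OK\r\n"
                + "Content-Type: text/plain; charset=utf-8\r\n"
                + "Content-Length: " + payload.length + "\r\n"
                + "Connection: keep-alive\r\n"
                + "\r\n";
        OutputStream out = clientSocket.getOutputStream();
        out.write(headers.getBytes(StandardCharsets.US_ASCII));
        out.write(payload);
        out.flush();                                                    // leave the socket open for the next request
    }
}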
OK, when you receive a connection, start a new Thread like this:
class ClientService extends Thread {
    private final Socket clientSocket;

    public ClientService(Socket clientSocket) {
        this.clientSocket = clientSocket;
    }

    public void run() {
        // do your work with the Socket clientSocket here
    }
}
This is how your server code should then look:
while (true) {
    Socket clientSocket = server.accept();
    new ClientService(clientSocket).start();
}
This allows you to process each connection without one having to wait for another until it times out.
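Filling in run() for this particular question, a rough sketch that assumes one request per line, a 5-second idle limit between requests, and leaves the response writing as a comment:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

class ClientService extends Thread {
    private final Socket clientSocket;

    ClientService(Socket clientSocket) {
        this.clientSocket = clientSocket;
    }

    @Override
    public void run() {
        try {
            clientSocket.setSoTimeout(5_000);                           // idle limit between requests on the keep-alive connection
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(clientSocket.getInputStream()));
            String requestLine;
            while ((requestLine = in.readLine()) != null) {
                // parse requestLine and write the response (with Content-Length) here;
                // the loop continues as soon as the next request arrives
            }
        } catch (SocketTimeoutException e) {
            // no request within 5 seconds: the connection has gone idle, so drop it
        } catch (IOException e) {
            // the client closed the connection or a network error occurred
        } finally {
            try { clientSocket.close(); } catch (IOException ignored) { }
        }
    }
}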
HTTP 1.1 has a timeout value that will enable the connection to be persistent and wait for a short while for possible data from the client.
Not really. It has a Connection: keep-alive setting, which is the default behaviour, and it allows endpoints to close connections that aren't in use after a period of idleness, but it doesn't have a timeout property itself.
I am trying to implement this timer in my program by using clientSocket.setSoTimeout().
This has nothing whatsoever to do with HTTP. It is a socket read timeout.
Even though this will help to leave the connection open for a certain amount of time, it will wait for that exact amount of time before allowing the next request to be read.
No it won't. It will cause read methods to throw SocketTimeoutException if no data arrives within the timeout period. Nothing else.
For example:
If timeout is set to 5 seconds,
Request 1 is read. Then the socket hangs and waits until the 5 seconds are over.
No it doesn't.
Request 2 is read. The socket waits until the 5 seconds are up again.
No it doesn't. You've made all this up. It is fantasy.
This proves to be a problem if my timeout is set to big values.
It isn't a problem with any timeout values whether large or small, because it simply does not happen.
This should not be the case, as the request should be processed once it is received and the timeout should only expire if no data is received throughout the specified duration.
That is exactly what Socket.setSoTimeout() already does.
Your question is founded on a fallacy.

Timer to check activity on socket connection?

We have a socket application; the snippet of its read loop is below. What we would like is that if 30 seconds pass with no more data, we shut the socket connection; if some data does come in, we reset the timer. Must I use a Timer, or check System.currentTimeMillis()?
while ((readChar = readSocket.read()) != -1)
{
    // processing.
}
You can configure the socket so that a read operation times out if no data is received within the specified interval.
From the Socket Javadoc:
public void setSoTimeout(int timeout) throws SocketException
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
Parameters:
timeout - the specified timeout, in milliseconds.
Throws:
SocketException - if there is an error in the underlying protocol, such as a TCP error.
Since:
JDK 1.1
See Also:
getSoTimeout()
Using this approach, you can read data, consume it (however your need to), and then read from the socket again. If you get the timeout exception, then close the socket.
socket.setSoTimeout(30 * 1000); // timeout after 30 seconds
try
{
    while ((readChar = readSocket.read()) != -1) // block reading data ...
    {
        // processing ...
    }
}
catch (SocketTimeoutException e) // we didn't get any data within 30 seconds ...
{
    socket.close(); // ... close the socket
}
Use asynchronous NIO operations.
If you use Java 6, async operations are tricky, but there are network libraries (Mina, Netty), though they are rather heavyweight.
If you use Java 7, true async network operations are implemented and easy to use (NIO.2). Even easier is to use a lightweight NIO.2 library such as df4j from https://github.com/rfqu/df4j.
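For example, with the NIO.2 API that shipped in Java 7, a read can be bounded by a timeout without touching SO_TIMEOUT; a rough sketch, with host, port and timeout as placeholders:
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class NioTimeoutSketch {
    public static void main(String[] args) throws Exception {
        try (AsynchronousSocketChannel channel = AsynchronousSocketChannel.open()) {
            channel.connect(new InetSocketAddress("localhost", 1234)).get();   // placeholder endpoint
            ByteBuffer buffer = ByteBuffer.allocate(8192);
            Future<Integer> pendingRead = channel.read(buffer);
            try {
                int bytesRead = pendingRead.get(30, TimeUnit.SECONDS);         // no data within 30 seconds -> timeout
                System.out.println("Read " + bytesRead + " bytes");
            } catch (TimeoutException e) {
                pendingRead.cancel(true);                                      // give up on this read
                System.out.println("No activity for 30 seconds, closing");
            }
        }                                                                      // channel is closed here either way
    }
}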

java httpclient detect server disconnect

I'm doing a GET request with the apache HttpClient. Is there a way to detect when the server disconnects while reading from the InputStream?
EOS on the input stream (read() returning -1, readLine() returning null, readXXX() throwing EOFException for any other XXX) is the primary mechanism, otherwise an IOException, typically 'connection reset'. Very rarely you may see a SocketException. If you are using read timeouts, a SocketTimeoutException.
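A rough sketch of consuming the response InputStream with those cases handled separately (the stream itself would come from the HttpClient entity; everything else here is a placeholder):
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketException;
import java.net.SocketTimeoutException;

class DisconnectAwareReader {
    // Drains the response stream and distinguishes the usual ways a broken
    // connection shows up while reading.
    static long drain(InputStream in) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        try {
            int n;
            while ((n = in.read(buffer)) != -1) {                       // -1 = orderly end of stream
                total += n;                                             // process the chunk here
            }
        } catch (SocketTimeoutException e) {
            throw e;                                                    // read timeout: no data within SO_TIMEOUT
        } catch (SocketException e) {
            throw e;                                                    // typically 'Connection reset': server dropped the connection mid-body
        }
        return total;
    }
}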
Sure, check out the Exception Handling section of the docs.

Forcing socket.connect to wait a specific time before it decides a connection is unavailable

I'm issuing a socket connection, using the following snippet
Socket socket = new Socket();
InetSocketAddress endPoint = new InetSocketAddress("localhost", 1234);
try
{
    socket.connect(endPoint, 30000);
}
catch (IOException e)
{
    e.printStackTrace();
    // Logging
}
The endpoint it is trying to connect to is offline. What I want it to do is attempt to connect and, using the 30000 ms timeout, wait for that period of time before concluding a result.
Currently, that 30000 parameter doesn't seem to be applied; judging by the timestamps in my logging, it appears to conclude within a second that the connection failed.
How can I force the connect to wait for a set amount of time before giving up?
13:13:57,685 6235 DEBUG [Thread-7] - Unable to connect to [localhost:1234]
13:13:58,685 7235 DEBUG [Thread-7] - Unable to connect to [localhost:1234]
13:13:59,695 8245 DEBUG [Thread-7] - Unable to connect to [localhost:1234]
13:14:00,695 9245 DEBUG [Thread-7] - Unable to connect to [localhost:1234]
EDIT: The API does state "Connects this socket to the server with a specified timeout value. A timeout of zero is interpreted as an infinite timeout. The connection will then block until established or an error occurs." However, it appears I'm not experiencing such behaviour, or am not catering for it; most likely the latter.
What you're getting here is correct. connect() won't sit on a socket waiting until it sees a server; it will attempt to connect and wait for a response. If there is nothing to connect to, it returns. If there is something to connect to, it will wait timeout seconds for a response and fail if none is received.
You need to distinguish among several possible exception conditions.
ConnectException with the text 'connection refused', which means the host was up and reachable and nothing was listening at the port. This happens very quickly and cannot be subjected to a timeout.
NoRouteToHostException: this indicates a connectivity issue. Again it happens immediately and cannot be subjected to a timeout.
UnknownHostException: the host names cannot be resolved via DNS. This happens immediately, or rather after a generally short DNS delay, and cannot be subjected to a timeout.
ConnectException with any other text: this can indicate a failure to respond by the target system. Usually happens when firewalls are present. Can be subjected to a timeout.
You are doing the correct thing by calling Socket.connect() with a timeout parameter. If you don't do this, or if you specify a zero timeout, the default system timeout is used, which is of the order of 60-75 seconds depending on the platform. This is contrary to the Javadoc's statement about an 'infinite timeout', which is not correct. Also, you cannot increase the timeout beyond this limit via Socket.connect() with a timeout parameter. Alternatively, you can use java.nio socket channels in non-blocking mode with a select() to administer the timeout for you, but you still can't increase the timeout beyond the platform default via this or any other method.
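Putting those cases together, a rough sketch (the host, port and timeout are placeholders):
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.NoRouteToHostException;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.net.UnknownHostException;

public class ConnectSketch {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            // 30-second cap; the effective limit is still bounded by the platform default
            socket.connect(new InetSocketAddress("localhost", 1234), 30_000);
            System.out.println("Connected");
        } catch (SocketTimeoutException e) {
            System.out.println("No response within the timeout (e.g. silently dropped by a firewall)");
        } catch (ConnectException e) {
            System.out.println("Refused immediately: host reachable, nothing listening on the port");
        } catch (NoRouteToHostException e) {
            System.out.println("Connectivity problem: no route to the host");
        } catch (UnknownHostException e) {
            System.out.println("The host name could not be resolved via DNS");
        } catch (IOException e) {
            System.out.println("Other I/O failure: " + e);
        }
    }
}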
When the timeout occurs, a SocketTimeoutException is thrown, which you do not catch and log. The IOException is fired when "an error occurs during the connection". The timeout is never applied because there's an error beforehand.
Edit: Just to clarify: TCP/IP as a suite has many specifics that could prevent a packet from reaching its desired outcome (a SYN/ACK packet). If a computer responds to your SYN packet by informing your application that the port is closed (i.e. there's no application running/listening there), an exception is fired telling you that it is impossible to connect to that port. If you wish to keep sending and re-sending SYN packets anyway, knowing that an application will eventually come online listening on that port, this is done at a different network layer (and, as far as I know, is not accessible with Java out of the box).
Try socket.setSoTimeout(timeout) before connecting.
