Default HttpClient for RestTemplate: have an absolute value for the read timeout - Java

I have a problem where an application keeps blocking indefinitely on a POST call made with a RestTemplate from Spring Boot.
ResponseEntity<String> response = restTemplate.postForEntity(destination.getUri(), request, String.class);
We use the default standard JDK implementation and create it like this:
this.restTemplate = restTemplateBuilder
        .setConnectTimeout(5000)
        .setReadTimeout(5000)
        .build();
This sets the connection and read timeouts to 5 seconds. But the read timeout is not an absolute value: as soon as our application receives some bytes, the read timeout resets, and this causes our application to wait indefinitely.
I would rather have an absolute read timeout where, if you don't get the complete response in less than 5 seconds, the template throws a TimeoutException.
I couldn't find anything like this in the options for the default client.
---EDIT---
I tried out @Peekay's answer but it doesn't seem to work:
CloseableHttpClient httpClient = HttpClientBuilder.create()
        .setConnectionTimeToLive(1, TimeUnit.SECONDS)
        .setConnectionManager(new PoolingHttpClientConnectionManager())
        .build();
HttpComponentsClientHttpRequestFactory clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory();
clientHttpRequestFactory.setHttpClient(httpClient);
return new RestTemplate(clientHttpRequestFactory);
I have also tried different ClientHttpRequestFactory implementations for the RestTemplate, e.g. HttpComponentsClientHttpRequestFactory, Netty4ClientHttpRequestFactory and OkHttp3ClientHttpRequestFactory, creating them like so:
Netty4ClientHttpRequestFactory factory = new Netty4ClientHttpRequestFactory();
factory.setConnectTimeout(timeout);
factory.setReadTimeout(readTimeout);
return new RestTemplate(factory);
I tested them against an endpoint that took longer than 5 seconds to respond. All of them returned a 200 success, except for Netty, which threw a ReadTimeoutException. Unfortunately I cannot switch to that client; it seems you need to implement this yourself if you want to keep using the default client.

You are right, you cannot set up an absolute value and you have to interrupt the thread itself.

The way we fixed this was by wrapping the RestTemplate call in a CompletableFuture and using the timeout functionality of that wrapper to cancel the call if it takes too long.
Here is an example:
CompletableFuture<T> requestWrapper = CompletableFuture.supplyAsync(() ->
        restTemplate.postForEntity(/* Whatever arguments you need to pass */));
try {
    // Absolute limit: wait at most 5 seconds for the whole call to complete
    return requestWrapper.get(5000, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
    requestWrapper.cancel(true);
    throw new TimeoutException("Endpoint took too long to respond, TimeoutException is triggered");
} catch (ExecutionException e) {
    // Rethrow the underlying cause of the failed call, wrapped so it propagates
    throw new RuntimeException(e.getCause());
}

You can use alternate HTTP clients with RestTemplate, such as the Apache HttpClient, which gives you more control over how the connections are set up, pooled, and maintained:
From its HttpClientBuilder you can set a Connection Time-to-Live, which is the maximum lifetime of a pooled connection.
You can define a RequestConfig specifying a connect timeout (max time to wait for a connection to be established) and a separate socket timeout (max time a read() will wait for data).
For more details see: setConnectTimeout vs. setConnectionTimeToLive vs. setSocketTimeout()
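For illustration, here is a minimal sketch (assuming Apache HttpClient 4.4+ and Spring's HttpComponentsClientHttpRequestFactory; the timeout values are only examples) of wiring those settings into a RestTemplate:
import java.util.concurrent.TimeUnit;

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class TimeoutRestTemplateFactory {

    public static RestTemplate create() {
        RequestConfig requestConfig = RequestConfig.custom()
                .setConnectTimeout(5_000)            // max time to establish the connection
                .setSocketTimeout(5_000)             // max inactivity between two data packets
                .setConnectionRequestTimeout(5_000)  // max wait for a connection from the pool
                .build();

        CloseableHttpClient httpClient = HttpClientBuilder.create()
                .setDefaultRequestConfig(requestConfig)
                .setConnectionTimeToLive(30, TimeUnit.SECONDS) // upper bound on how long a pooled connection lives
                .build();

        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}
Keep in mind that the socket timeout still only bounds the inactivity between two packets, so this alone does not provide the absolute per-request timeout asked about above; the CompletableFuture wrapper shown earlier is still what caps the total call time.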

Alternatively, you can set JVM-wide default timeouts for the JDK's built-in URLConnection via system properties:
-Dsun.net.client.defaultConnectTimeout=<TimeoutInMilliSec>
-Dsun.net.client.defaultReadTimeout=<TimeoutInMilliSec>
https://howtodoinjava.com/spring-boot2/resttemplate/resttemplate-timeout-example/

Related

Why adding socket read timeout doesn't help for socketread0 [duplicate]

Performing millions of HTTP requests with different Java libraries gives me threads hung on:
java.net.SocketInputStream.socketRead0()
which is a native method.
I tried to set up Apache HttpClient and RequestConfig to have timeouts on (I hope) everything that is possible, but still I have (probably infinite) hangs on socketRead0. How do I get rid of them?
The hang ratio is about 1 per 10,000 requests (to 10,000 different hosts) and it can probably last forever (I've confirmed a thread was still hung after 10 hours).
JDK 1.8 on Windows 7.
My HttpClient factory:
SocketConfig socketConfig = SocketConfig.custom()
        .setSoKeepAlive(false)
        .setSoLinger(1)
        .setSoReuseAddress(true)
        .setSoTimeout(5000)
        .setTcpNoDelay(true).build();

HttpClientBuilder builder = HttpClientBuilder.create();
builder.disableAutomaticRetries();
builder.disableContentCompression();
builder.disableCookieManagement();
builder.disableRedirectHandling();
builder.setConnectionReuseStrategy(new NoConnectionReuseStrategy());
builder.setDefaultSocketConfig(socketConfig);
return builder.build(); // build from the configured builder
My RequestConfig factory:
HttpGet request = new HttpGet(url);
RequestConfig config = RequestConfig.custom()
        .setCircularRedirectsAllowed(false)
        .setConnectionRequestTimeout(8000)
        .setConnectTimeout(4000)
        .setMaxRedirects(1)
        .setRedirectsEnabled(true)
        .setSocketTimeout(5000)
        .setStaleConnectionCheckEnabled(true).build();
request.setConfig(config);
return request; // return the request that actually carries the config
OpenJDK socketRead0 source
Note: actually I have a "trick" - I can schedule .getConnectionManager().shutdown() in another thread, with cancellation of the Future if the request finished properly, but it is deprecated and it also kills the whole HttpClient, not only that single request.
Though this question mentions Windows, I have the same problem on Linux. It appears there is a flaw in the way the JVM implements blocking socket timeouts:
https://bugs.openjdk.java.net/browse/JDK-8049846
https://bugs.openjdk.java.net/browse/JDK-8075484
To summarize, timeout for blocking sockets is implemented by calling poll on Linux (and select on Windows) to determine that data is available before calling recv. However, at least on Linux, both methods can spuriously indicate that data is available when it is not, leading to recv blocking indefinitely.
From poll(2) man page BUGS section:
See the discussion of spurious readiness notifications under the BUGS section of select(2).
From select(2) man page BUGS section:
Under Linux, select() may report a socket file descriptor as "ready
for reading", while nevertheless a subsequent read blocks. This could
for example happen when data has arrived but upon examination has
wrong checksum and is discarded. There may be other circumstances
in which a file descriptor is spuriously reported as ready. Thus it
may be safer to use O_NONBLOCK on sockets that should not block.
The Apache HTTP Client code is a bit hard to follow, but it appears that connection expiration is only set for HTTP keep-alive connections (which you've disabled) and is indefinite unless the server specifies otherwise. Therefore, as pointed out by oleg, the Connection eviction policy approach won't work in your case and can't be relied upon in general.
As Clint said, you should consider a non-blocking HTTP client, or (seeing that you are using Apache HttpClient) implement multithreaded request execution to prevent possible hangs of the main application thread (this does not solve the problem, but it is better than having to restart your app because it is frozen). In any case, you set the setStaleConnectionCheckEnabled property, but the stale connection check is not 100% reliable. From the Apache HttpClient tutorial:
One of the major shortcomings of the classic blocking I/O model is
that the network socket can react to I/O events only when blocked in
an I/O operation. When a connection is released back to the manager,
it can be kept alive however it is unable to monitor the status of the
socket and react to any I/O events. If the connection gets closed on
the server side, the client side connection is unable to detect the
change in the connection state (and react appropriately by closing the
socket on its end).
HttpClient tries to mitigate the problem by testing whether the
connection is 'stale', that is no longer valid because it was closed
on the server side, prior to using the connection for executing an
HTTP request. The stale connection check is not 100% reliable and adds
10 to 30 ms overhead to each request execution.
The Apache HttpComponents crew recommends the implementation of a Connection eviction policy
The only feasible solution that does not involve a one thread per
socket model for idle connections is a dedicated monitor thread used
to evict connections that are considered expired due to a long period
of inactivity. The monitor thread can periodically call
ClientConnectionManager#closeExpiredConnections() method to close all
expired connections and evict closed connections from the pool. It can
also optionally call ClientConnectionManager#closeIdleConnections()
method to close all connections that have been idle over a given
period of time.
Take a look at the sample code in the Connection eviction policy section and try to implement it in your application along with multithreaded request execution; I think implementing both mechanisms will prevent your undesired hangs.
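For reference, a minimal sketch of such a monitor thread (assuming a shared PoolingHttpClientConnectionManager and HttpClient 4.3+; the 5-second check interval and 30-second idle limit are arbitrary):
import java.util.concurrent.TimeUnit;

import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

// Dedicated monitor that periodically evicts expired and long-idle connections from the pool.
public class IdleConnectionMonitorThread extends Thread {

    private final PoolingHttpClientConnectionManager connectionManager;
    private volatile boolean shutdown;

    public IdleConnectionMonitorThread(PoolingHttpClientConnectionManager connectionManager) {
        this.connectionManager = connectionManager;
        setDaemon(true);
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5_000); // check every 5 seconds
                    connectionManager.closeExpiredConnections();
                    connectionManager.closeIdleConnections(30, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt(); // exit on interrupt
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}
As noted above, this only cleans up connections that are sitting idle in the pool; it cannot unblock a thread that is already stuck in socketRead0.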
You should consider a non-blocking HTTP client like Grizzly or Netty, which do not have blocking operations that can hang a thread.
I have more than 50 machines that make about 200k requests/day/machine. They are running Amazon Linux AMI 2017.03. I previously had jdk1.8.0_102, now I have jdk1.8.0_131. I am using both Apache HttpClient and OkHttp as scraping libraries.
Each machine was running 50 threads, and sometimes the threads get lost. After profiling with the YourKit Java profiler I got:
ScraperThread42 State: RUNNABLE CPU usage on sample: 0ms
java.net.SocketInputStream.socketRead0(FileDescriptor, byte[], int, int, int) SocketInputStream.java (native)
java.net.SocketInputStream.socketRead(FileDescriptor, byte[], int, int, int) SocketInputStream.java:116
java.net.SocketInputStream.read(byte[], int, int, int) SocketInputStream.java:171
java.net.SocketInputStream.read(byte[], int, int) SocketInputStream.java:141
okio.Okio$2.read(Buffer, long) Okio.java:139
okio.AsyncTimeout$2.read(Buffer, long) AsyncTimeout.java:211
okio.RealBufferedSource.indexOf(byte, long) RealBufferedSource.java:306
okio.RealBufferedSource.indexOf(byte) RealBufferedSource.java:300
okio.RealBufferedSource.readUtf8LineStrict() RealBufferedSource.java:196
okhttp3.internal.http1.Http1Codec.readResponse() Http1Codec.java:191
okhttp3.internal.connection.RealConnection.createTunnel(int, int, Request, HttpUrl) RealConnection.java:303
okhttp3.internal.connection.RealConnection.buildTunneledConnection(int, int, int, ConnectionSpecSelector) RealConnection.java:156
okhttp3.internal.connection.RealConnection.connect(int, int, int, List, boolean) RealConnection.java:112
okhttp3.internal.connection.StreamAllocation.findConnection(int, int, int, boolean) StreamAllocation.java:193
okhttp3.internal.connection.StreamAllocation.findHealthyConnection(int, int, int, boolean, boolean) StreamAllocation.java:129
okhttp3.internal.connection.StreamAllocation.newStream(OkHttpClient, boolean) StreamAllocation.java:98
okhttp3.internal.connection.ConnectInterceptor.intercept(Interceptor$Chain) ConnectInterceptor.java:42
okhttp3.internal.http.RealInterceptorChain.proceed(Request, StreamAllocation, HttpCodec, Connection) RealInterceptorChain.java:92
okhttp3.internal.http.RealInterceptorChain.proceed(Request) RealInterceptorChain.java:67
okhttp3.internal.http.BridgeInterceptor.intercept(Interceptor$Chain) BridgeInterceptor.java:93
okhttp3.internal.http.RealInterceptorChain.proceed(Request, StreamAllocation, HttpCodec, Connection) RealInterceptorChain.java:92
okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(Interceptor$Chain) RetryAndFollowUpInterceptor.java:124
okhttp3.internal.http.RealInterceptorChain.proceed(Request, StreamAllocation, HttpCodec, Connection) RealInterceptorChain.java:92
okhttp3.internal.http.RealInterceptorChain.proceed(Request) RealInterceptorChain.java:67
okhttp3.RealCall.getResponseWithInterceptorChain() RealCall.java:198
okhttp3.RealCall.execute() RealCall.java:83
I found out that they have a fix for this
https://bugs.openjdk.java.net/browse/JDK-8172578
in JDK 8u152 (early access). I have installed it on one of our machines. Now I am waiting to see some good results.
Given that no one else has responded so far, here is my take:
Your timeout setting looks perfectly OK to me. The reason why certain requests appear to be constantly blocked in a java.net.SocketInputStream#socketRead0() call is likely a combination of misbehaving servers and your local configuration. The socket timeout defines a maximum period of inactivity between two consecutive i/o read operations (or, in other words, two consecutive incoming packets). Your socket timeout setting is 5,000 milliseconds. As long as the opposite endpoint keeps sending a packet every 4,999 milliseconds for a chunk-encoded message, the request will never time out and will end up spending most of its time blocked in java.net.SocketInputStream#socketRead0(). You can find out whether or not this is the case by running HttpClient with wire logging turned on.
For the blocking Apache HttpClient I found the best solution is to get the connection manager and shut it down.
So in a high-reliability solution I just schedule the shutdown in another thread, and in case the request does not complete in time, I shut it down from that other thread.
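Roughly, that approach looks like the sketch below (assumptions: HttpClient 4.3+, an explicitly created PoolingHttpClientConnectionManager instead of the deprecated getConnectionManager(), a 30-second hard deadline, and made-up class and method names):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class HardTimeoutExample {

    private final ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();

    public String fetch(String url) throws Exception {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        CloseableHttpClient client = HttpClients.custom().setConnectionManager(cm).build();

        // Hard deadline: shut the connection manager down if the request is still running after 30 seconds.
        ScheduledFuture<?> kill = watchdog.schedule(cm::shutdown, 30, TimeUnit.SECONDS);
        try (CloseableHttpResponse response = client.execute(new HttpGet(url))) {
            return EntityUtils.toString(response.getEntity());
        } finally {
            kill.cancel(false); // the request finished (or failed) in time, no need to kill the pool
            client.close();
        }
    }
}
As with the deprecated variant mentioned earlier, shutting down the connection manager tears down the whole pool, not just the single stuck request, so treat it as a last resort.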
I bumped into the same issue using the Apache HttpClient.
There's a pretty simple workaround (which doesn't require shutting the connection manager down):
In order to reproduce the hang, one needs to execute the request from the question in a new thread, paying attention to the details:
run the request in a separate thread, close the request and release its connection in a different thread, interrupt the hanging thread
don't run EntityUtils.consumeQuietly(response.getEntity()) in the finally block (because it hangs on a 'dead' connection)
First, add the interface
interface RequestDisposer {
void dispose();
}
Execute an HTTP request in a new thread
final AtomicReference<RequestDisposer> requestDisposer = new AtomicReference<>(null);
final Thread thread = new Thread(() -> {
    final HttpGet request = new HttpGet("http://my.url");
    final RequestDisposer disposer = () -> {
        request.abort();
        request.releaseConnection();
    };
    requestDisposer.set(disposer);
    try (final CloseableHttpResponse response = httpClient.execute(request)) {
        ...
    } finally {
        disposer.dispose();
    }
});
thread.start();
Call dispose() in the main thread to close the hanging connection:
requestDisposer.get().dispose(); // better check if it's not null first
thread.interrupt();
thread.join();
That fixed the issue for me.
My stacktrace looked like this:
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:253)
at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:227)
at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:186)
at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
For anyone interested: it is easily reproducible; interrupt the thread without aborting the request and releasing the connection (the ratio is about 1/100).
Windows 10, version 10.0.
jdk8.151-x64.
I feel that all these answers are way too specific.
We have to note that this is probably a real JVM bug. It should be possible to get the file descriptor and close it. All this timeout-talk is too high level. You do not want a timeout to the extent that the connection fails, what you want is an ability to hard break this stuck thread and stop or interrupt it.
The way the JVM should implement the SocketInputStream.socketRead function is to set some internal default timeout, which could be as low as 1 second, and when that timeout expires, immediately loop back into socketRead0. While that is happening, the Thread.interrupt and Thread.stop commands can take effect.
The even better way of doing this, of course, is not to do any blocking wait at all, but instead use the select(2) system call with a list of file descriptors and, when any one has data available, perform the read operation on it.
Just look all over the internet at all these people having trouble with threads stuck in java.net.SocketInputStream#socketRead0; it's the most popular topic about java.net.SocketInputStream, hands down!
So, while the bug is not fixed, I wonder about the dirtiest trick I can come up with to break out of this situation. Something like connecting with the debugger interface to get to the stack frame of the socketRead call, grab the FileDescriptor, then dig into that to get the int fd number and finally make a native close(2) call on that fd.
Do we have a chance to do that? (Don't tell me "it's not good practice") -- if so, let's do it!
I faced the same issue today. Based on @Sergei Voitovich's answer I've tried to make it work while still using Apache HttpClient.
Since I am using Java 8, it is simpler to implement a timeout that aborts the connection.
Here is a draft of the implementation:
private HttpResponse executeRequest(Request request) {
    InterruptibleRequestExecution requestExecution = new InterruptibleRequestExecution(request, executor);
    ExecutorService executorService = Executors.newSingleThreadExecutor();
    try {
        return executorService.submit(requestExecution).get(<your timeout in milliseconds>, TimeUnit.MILLISECONDS);
    } catch (TimeoutException | ExecutionException e) {
        // Your request timed out, you can throw an exception here if you want
        throw new UsefulExceptionForYourApplication(e);
    } catch (InterruptedException e) {
        // Always remember to call interrupt after catching InterruptedException
        Thread.currentThread().interrupt();
        throw new UsefulExceptionForYourApplication(e);
    } finally {
        // This forces the single-thread pool created by Executors.newSingleThreadExecutor() to stop and
        // makes the pending request abort inside the thread. So if the request is hanging in socketRead0
        // it will stop, and the thread will be terminated as well.
        forceStopIdleThreadsAndRequests(requestExecution, executorService);
    }
}

private void forceStopIdleThreadsAndRequests(InterruptibleRequestExecution execution,
                                             ExecutorService executorService) {
    execution.abortRequest();
    executorService.shutdownNow();
}
The code above will create a new Thread to execute the request using org.apache.http.client.fluent.Executor. Timeout can be easily configured.
The execution of the thread is defined in InterruptibleRequestExecution which you can see below.
private static class InterruptibleRequestExecution implements Callable<HttpResponse> {
    private final Request request;
    private final Executor executor;
    private final RequestDisposer disposer;

    public InterruptibleRequestExecution(Request request, Executor executor) {
        this.request = request;
        this.executor = executor;
        this.disposer = request::abort;
    }

    @Override
    public HttpResponse call() {
        try {
            return executor.execute(request).returnResponse();
        } catch (IOException e) {
            throw new UsefulExceptionForYourApplication(e);
        } finally {
            disposer.dispose();
        }
    }

    public void abortRequest() {
        disposer.dispose();
    }

    @FunctionalInterface
    interface RequestDisposer {
        void dispose();
    }
}
The results are really good. We've had times when some connections were hanging in socketRead0 for 7 hours! Now it never exceeds the defined timeout, and it's working in production with millions of requests per day without any problems.

What does setDefaultMaxPerRoute and setMaxTotal mean in HttpClient?

I am using Apache HttpClient in one of my projects. I am also using a PoolingHttpClientConnectionManager along with my HttpClient.
I am confused about what these properties mean. I tried going through the documentation in the code, but I don't see any documentation around these settings, so I was not able to understand them.
setMaxTotal
setDefaultMaxPerRoute
setConnectTimeout
setSocketTimeout
setConnectionRequestTimeout
setStaleConnectionCheckEnabled
Below is how I am using in my code:
RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(5 * 1000)
        .setSocketTimeout(5 * 1000)
        .setStaleConnectionCheckEnabled(false)
        .build();
PoolingHttpClientConnectionManager poolingHttpClientConnectionManager = new PoolingHttpClientConnectionManager();
poolingHttpClientConnectionManager.setMaxTotal(200);
poolingHttpClientConnectionManager.setDefaultMaxPerRoute(20);
CloseableHttpClient httpClientBuilder = HttpClientBuilder.create()
        .setConnectionManager(poolingHttpClientConnectionManager)
        .setDefaultRequestConfig(requestConfig)
        .build();
Can anyone explain these properties to me so that I can decide what values to use? Also, are there any other properties I should set, apart from the ones shown above, to get better performance?
I am using http-client 4.3.1
Some parameters are explained at http://hc.apache.org/httpclient-3.x/preference-api.html
Others must be gleaned from the source.
setMaxTotal
The maximum number of connections allowed across all routes.
setDefaultMaxPerRoute
The maximum number of connections allowed for a route that has not been specified otherwise by a call to setMaxPerRoute. Use setMaxPerRoute when you know the route ahead of time and setDefaultMaxPerRoute when you do not (see the sketch after this list).
setConnectTimeout
How long to wait for a connection to be established with the remote server before throwing a timeout exception.
setSocketTimeout
How long to wait for the server to respond to various calls before throwing a timeout exception. See http://docs.oracle.com/javase/1.5.0/docs/api/java/net/SocketOptions.html#SO_TIMEOUT for details.
setConnectionRequestTimeout
How long to wait when trying to checkout a connection from the connection pool before throwing an exception (the connection pool won't return immediately if, for example, all the connections are checked out).
setStaleConnectionCheckEnabled
Can be disabled for a slight performance improvement at the cost of potential IOExceptions. See http://hc.apache.org/httpclient-3.x/performance.html#Stale_connection_check
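If you do know some routes ahead of time, a small sketch combining these settings might look like this (the host name and limits are placeholders):
import org.apache.http.HttpHost;
import org.apache.http.conn.routing.HttpRoute;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(200);           // cap across all routes
cm.setDefaultMaxPerRoute(20);  // cap for any route not configured explicitly
// Allow more concurrent connections to one known, heavily used backend.
cm.setMaxPerRoute(new HttpRoute(new HttpHost("api.example.com", 443, "https")), 50);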

Apache HttpClient Interim Error: NoHttpResponseException

I have a web service which accepts a POST method with XML. It works fine, then on some random occasion it fails to communicate with the server, throwing an IOException with the message "The target server failed to respond". The subsequent calls work fine.
It happens mostly when I make some calls and then leave my application idle for 10-15 minutes; the first call I make after that returns this error.
I tried a couple of things ...
I set up the retry handler like this:
HttpRequestRetryHandler retryHandler = new HttpRequestRetryHandler() {
    public boolean retryRequest(IOException e, int retryCount, HttpContext httpCtx) {
        if (retryCount >= 3) {
            Logger.warn(CALLER, "Maximum tries reached, exception would be thrown to outer block");
            return false;
        }
        if (e instanceof org.apache.http.NoHttpResponseException) {
            Logger.warn(CALLER, "No response from server on " + retryCount + " call");
            return true;
        }
        return false;
    }
};
httpPost.getParams().setParameter(HttpMethodParams.RETRY_HANDLER, retryHandler);
but this retry handler never gets called (yes, I am using the right instanceof clause); while debugging, this class is never invoked.
I even tried setting HttpProtocolParams.setUseExpectContinue(httpClient.getParams(), false); but to no avail. Can someone suggest what I can do now?
IMPORTANT
Besides figuring out why I am getting the exception, one of my important concerns is why the retry handler isn't working here.
Most likely persistent connections that are kept alive by the connection manager become stale. That is, the target server shuts down the connection on its end without HttpClient being able to react to that event, while the connection is being idle, thus rendering the connection half-closed or 'stale'. Usually this is not a problem. HttpClient employs several techniques to verify connection validity upon its lease from the pool. Even if the stale connection check is disabled and a stale connection is used to transmit a request message the request execution usually fails in the write operation with SocketException and gets automatically retried. However under some circumstances the write operation can terminate without an exception and the subsequent read operation returns -1 (end of stream). In this case HttpClient has no other choice but to assume the request succeeded but the server failed to respond most likely due to an unexpected error on the server side.
The simplest way to remedy the situation is to evict expired connections and connections that have been idle longer than, say, 1 minute from the pool after a period of inactivity. For details please see the 2.5. Connection eviction policy of the HttpClient 4.5 tutorial.
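Since HttpClient 4.4 this eviction can also be switched on directly when building the client, without writing a monitor thread yourself; a minimal sketch (the one-minute values are only examples):
import java.util.concurrent.TimeUnit;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

CloseableHttpClient client = HttpClients.custom()
        .evictExpiredConnections()                  // drop connections past their keep-alive / TTL
        .evictIdleConnections(1, TimeUnit.MINUTES)  // drop connections idle for more than 1 minute
        .setConnectionTimeToLive(1, TimeUnit.MINUTES)
        .build();
Note that the built-in eviction thread only runs when the client owns its connection manager (i.e. the manager is not marked as shared).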
The accepted answer is right but lacks a solution. To avoid this error, you can add setHttpRequestRetryHandler (or setRetryHandler for Apache HttpComponents 4.4) to your HTTP client, like in this answer.
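As an illustration, here is a sketch for HttpClient 4.4+ of a retry handler that retries only NoHttpResponseException a limited number of times (the limit of 3 is arbitrary):
import java.io.IOException;

import org.apache.http.NoHttpResponseException;
import org.apache.http.client.HttpRequestRetryHandler;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.protocol.HttpContext;

HttpRequestRetryHandler retryHandler = (IOException exception, int executionCount, HttpContext context) -> {
    if (executionCount > 3) {
        return false; // give up after 3 attempts
    }
    // Retry when the server dropped a stale connection without sending a response.
    return exception instanceof NoHttpResponseException;
};

CloseableHttpClient client = HttpClients.custom()
        .setRetryHandler(retryHandler)
        .build();
Be aware that retrying non-idempotent requests such as POST can cause duplicate processing on the server, so only retry cases you know are safe.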
HttpClient 4.4 suffered from a bug in this area relating to validating possibly stale connections before returning to the requestor. It didn't validate whether a connection was stale, and this then results in an immediate NoHttpResponseException.
This issue was resolved in HttpClient 4.4.1. See this JIRA and the release notes
Solution: change the ReuseStrategy to never
Since this problem is very complex and there are so many different factors which can fail I was happy to find this solution in another post: How to solve org.apache.http.NoHttpResponseException
Never reuse connections:
configure in org.apache.http.impl.client.AbstractHttpClient:
httpClient.setReuseStrategy(new NoConnectionReuseStrategy());
The same can be configured on a org.apache.http.impl.client.HttpClientBuilder builder:
builder.setConnectionReuseStrategy(new NoConnectionReuseStrategy());
Although the accepted answer is right, IMHO it is just a workaround.
To be clear: it's a perfectly normal situation that a persistent connection may become stale. But unfortunately it's very bad when the HTTP client library cannot handle it properly.
Since this faulty behavior in Apache HttpClient was not fixed for many years, I definitely would prefer to switch to a library that can easily recover from a stale connection problem, e.g. OkHttp.
Why?
OkHttp pools http connections by default.
It gracefully recovers from situations where an HTTP connection becomes stale and the request cannot be retried because it is not idempotent (e.g. POST). I cannot say the same about Apache HttpClient (see the NoHttpResponseException mentioned above).
Supports HTTP/2.0 from early drafts and beta versions.
When I switched to OkHttp, my problems with NoHttpResponseException disappeared forever.
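For reference, a minimal OkHttp 3.x client setup might look like this (the timeout values are only illustrative):
import java.util.concurrent.TimeUnit;

import okhttp3.OkHttpClient;

OkHttpClient client = new OkHttpClient.Builder()
        .connectTimeout(5, TimeUnit.SECONDS)
        .readTimeout(5, TimeUnit.SECONDS)
        // Already true by default; shown here to make the stale-connection recovery explicit.
        .retryOnConnectionFailure(true)
        .build();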
Nowadays, most HTTP connections are considered persistent unless declared otherwise. However, to save server resources the connection is rarely kept open forever; the default keep-alive timeout of many servers is rather short, for example 5 seconds for Apache httpd 2.2 and above.
The org.apache.http.NoHttpResponseException error most likely comes from a persistent connection that was closed by the server.
It's possible to set the maximum time to keep unused connections open in the Apache HttpClient pool, in milliseconds.
With Spring Boot, one way to achieve this:
public class RestTemplateCustomizers {
    static public class MaxConnectionTimeCustomizer implements RestTemplateCustomizer {
        @Override
        public void customize(RestTemplate restTemplate) {
            HttpClient httpClient = HttpClientBuilder
                    .create()
                    .setConnectionTimeToLive(1000, TimeUnit.MILLISECONDS)
                    .build();
            restTemplate.setRequestFactory(
                    new HttpComponentsClientHttpRequestFactory(httpClient));
        }
    }
}

// In your service that uses a RestTemplate
public MyRestService(RestTemplateBuilder builder) {
    restTemplate = builder
            .customizers(new RestTemplateCustomizers.MaxConnectionTimeCustomizer())
            .build();
}
This can happen if disableContentCompression() is set on a pooling manager assigned to your HttpClient, and the target server is trying to use gzip compression.
Same problem for me on Apache HttpClient 4.5.5.
Adding the default header
Connection: close
resolved the problem.
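A sketch of how such a default header can be added when building the client (assuming HttpClient 4.5.x):
import java.util.Collections;

import org.apache.http.HttpHeaders;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicHeader;

CloseableHttpClient client = HttpClients.custom()
        // Ask the server to close the connection after each exchange,
        // so no stale pooled connection can be reused.
        .setDefaultHeaders(Collections.singletonList(new BasicHeader(HttpHeaders.CONNECTION, "close")))
        .build();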
Use PoolingHttpClientConnectionManager instead of BasicHttpClientConnectionManager.
BasicHttpClientConnectionManager will make an effort to reuse the connection for subsequent requests with the same route. It will, however, close the existing connection and re-open it when the route of the new request does not match that of the existing connection.
I faced the same issue; I resolved it by adding a "Connection: close" header via an extension.
Step 1: create a new class ConnectionCloseExtension
import com.github.tomakehurst.wiremock.common.FileSource;
import com.github.tomakehurst.wiremock.extension.Parameters;
import com.github.tomakehurst.wiremock.extension.ResponseTransformer;
import com.github.tomakehurst.wiremock.http.HttpHeader;
import com.github.tomakehurst.wiremock.http.HttpHeaders;
import com.github.tomakehurst.wiremock.http.Request;
import com.github.tomakehurst.wiremock.http.Response;

public class ConnectionCloseExtension extends ResponseTransformer {
    @Override
    public Response transform(Request request, Response response, FileSource files, Parameters parameters) {
        return Response.Builder
                .like(response)
                .headers(HttpHeaders.copyOf(response.getHeaders())
                        .plus(new HttpHeader("Connection", "Close")))
                .build();
    }

    @Override
    public String getName() {
        return "ConnectionCloseExtension";
    }
}
Step 2: register the extension class with the WireMockServer like below:
final WireMockServer wireMockServer = new WireMockServer(options()
        .extensions(ConnectionCloseExtension.class)
        .port(httpPort));

HttpClient stuck without any exception

I'm developing a long-running application that heavily uses the HttpClient from apache.
On my first test run, the application worked perfectly until it just got stuck. It wasn't stopped, it didn't throw any exception, it just sits there doing nothing.
I did a second run just now and timed it: the application stopped after approximately 24 hours of constant running. Additionally, I noticed that the internet connection of the laptop it was running on was terminated at the exact moment the application got stuck. I had to reboot my WLAN adapter to get the network running again.
The application, though, didn't resume working after the connection was up again. And now, it's stuck again.
Is there any timeout controller I'm not aware of in the HttpClient? Why doesn't my application throw an exception when the connection is down?
The part that uses the client looks as follows;
public HttpUtil(ConfigUtil config) {
    this.config = config;
    client = new DefaultHttpClient();
    client.getParams().setParameter(HttpProtocolParams.USER_AGENT, this.config.getProperty("httputil.userAgent"));
}

public String getContentAsString(String url) throws ParseException, ClientProtocolException, IOException {
    return EntityUtils.toString(
            client.execute(new HttpGet(url)).getEntity());
}
The application repeatedly calls httputil.getContentAsString() on the URLs it needs.
This code is now deprecated (get HttpParams, etc). A better way is:
RequestConfig defaultRequestConfig = RequestConfig.custom()
        .setCookieSpec(CookieSpecs.BEST_MATCH)
        .setExpectContinueEnabled(true)
        .setStaleConnectionCheckEnabled(true)
        .setTargetPreferredAuthSchemes(Arrays.asList(AuthSchemes.NTLM, AuthSchemes.DIGEST))
        .setProxyPreferredAuthSchemes(Arrays.asList(AuthSchemes.BASIC))
        .build();

HttpGet httpGet = new HttpGet(url);
RequestConfig requestConfig = RequestConfig.copy(defaultRequestConfig)
        .setSocketTimeout(5000)
        .setConnectTimeout(5000)
        .setConnectionRequestTimeout(5000)
        .build();
httpGet.setConfig(requestConfig);
As of version 4.4, both answers by users user2393012 and Stephen C have been deprecated. I'm not sure if there is another way of doing it, but the way I do it is by using a builder paradigm, HTTPClientBuilder.
Ex.
HttpClients.custom().setConnectionTimeToLive(1, TimeUnit.MINUTES).build()
A very similar problem to the one the OP mentioned (it might actually have been the OP's problem) also happens, but it is due to Apache setting the default to only two concurrent connections per client. The solution would be to increase the max connections, or close them if you can.
To increase the max connections:
HttpClients.custom().setMaxConnPerRoute(100000).build()
To close connections, you can use a BasicHttpClientConnectionManager and call its close method when you are done with the connection.
I gave a similar answer in another thread (HttpClient hangs on socketRead0 with successfully executed method)
In my case, I was setting the connectionTimeout and socketTimeout on the request, but not on the connection socket used during the establishment of the SSL connection. As a result, it would sometimes hang during the SSL handshake. Below is some code that sets all 3 timeouts using v4.4 (also tested on v4.5).
// Configure the socket timeout for the connection, incl. ssl tunneling
connManager = new PoolingHttpClientConnectionManager();
connManager.setMaxTotal(200);
connManager.setDefaultMaxPerRoute(100);
SocketConfig sc = SocketConfig.custom()
        .setSoTimeout(soTimeoutMs)
        .build();
connManager.setDefaultSocketConfig(sc);
HttpClient client = HttpClients.custom()
        .setConnectionManager(connManager)
        .setConnectionManagerShared(true)
        .build();

// configure the timeouts (socket and connection) for the request
RequestConfig.Builder config = RequestConfig.copy(RequestConfig.DEFAULT);
config.setConnectionRequestTimeout(connectionTimeoutMs);
config.setSocketTimeout(socketTimeoutMs);
HttpRequestBase req = new HttpGet(uri);
req.setConfig(config.build());
client.execute(req);
You haven't said which version of HttpClient you are using, but assuming that it is version 4, this blog article explains what to do.
DefaultHttpClient httpClient = new DefaultHttpClient();
HttpParams httpParams = httpClient.getParams();
HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeoutMillis);
HttpConnectionParams.setSoTimeout(httpParams, socketTimeoutMillis);
I had all the timeouts set up just fine, but I found out we have one URL that does HTTP chunking but sends no results (it works fine in Chrome, but in HttpClient it hangs forever, even with the timeout set). Luckily I own the server and now just return some garbage, and it no longer hangs. This seems like a very unique bug in that HttpClient does not handle some kind of empty-chunking case well (though I could be way off)... I just know it hangs every time on that same URL with empty data, and that URL is an HTTP-chunked CSV download back to our HttpClient.
I had the same issue; it got stuck because I didn't close the DefaultHttpClient.
So this is wrong:
try {
    DefaultHttpClient httpclient = new DefaultHttpClient();
    ...
} catch (Exception e) {
    e.printStackTrace();
}
And this is right:
try (DefaultHttpClient httpclient = new DefaultHttpClient()) {
    ...
} catch (Exception e) {
    e.printStackTrace();
}
Hope it helps someone.
By default, HttpClient does not time out (which causes more problems than it solves). What you are describing could be a hardware issue: if your network adapter died, HttpClient will hang.
Here are the parameters that can be set via HttpParams as part of the constructor to DefaultHttpClient, including:
http.socket.timeout: defines the socket timeout (SO_TIMEOUT) in milliseconds, which is the timeout for waiting for data or, put differently, the maximum period of inactivity between two consecutive data packets. A timeout value of zero is interpreted as an infinite timeout. This parameter expects a value of type java.lang.Integer. If this parameter is not set, read operations will not time out (infinite timeout).
That will set a timeout on the connection, so after the set timeout an exception will be thrown.
