Is this the right way to use AIMDBackoffManager to instantiate HttpClient? - java

Background:
I am using HttpClient (SolrJ) to connect to a Solr service. The question is not directly related to Solr, though.
I bumped into the following issue when doing load testing:
Caused by: java.lang.IllegalStateException: Invalid use of BasicClientConnManager: connection still allocated.
SO answer suggesting the use of a pooled connection manager:
Invalid use of BasicClientConnManager: connection still allocated
Question:
I am using the PoolingClientConnectionManager as in the following code. Instead of manually throttling the connection pool size, I would like it to be managed by the AIMDBackoffManager. However, I see that the AIMDBackoffManager needs the connection pool as its constructor parameter.
public static final PoolingClientConnectionManager poolingConnectionManager = new PoolingClientConnectionManager();

public static DefaultHttpClient getHttpClient() {
    DefaultHttpClient httpClient = new DefaultHttpClient(poolingConnectionManager);
    httpClient.setBackoffManager(new AIMDBackoffManager(poolingConnectionManager));
    ...
    ...
}
I googled a fair bit but I am unable to find any examples of BackoffManager usage. So this is what I did, but I am not excited about passing the connection manager twice to the DefaultHttpClient. Or should I not be worried, considering that the first time I am passing it to the HttpClient and the second time I am passing it to the BackoffManager?
I am using httpclient-4.2.3

I ventured into these deep waters as well. I have been investigating how to use ServiceUnavailableRetryStrategy, which seems to be failing due to the BackoffManager in my case. I have the impression that this is not a finished piece of functionality, as I can't google up any usage of it and there is not much in the HttpClient source code either.

The AIMDBackoffManager constructor takes a ConnPoolControl (which the connection manager implements). Looking at this interface, you'll see it only exposes the per-route statistics of the pool, which is what the BackoffManager uses to perform its task.
So you should not be worried about passing the connection manager twice while building the client; just be aware that AIMDBackoffManager acquires a lock on the connection manager in its backOff and probe implementations, which you can see in the source.
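For what it's worth, here is a minimal sketch of how this could be wired up against httpclient 4.2.x (the factory class name is made up). As far as I can tell from the 4.2 source, a ConnectionBackoffStrategy also has to be set, otherwise the BackoffManager is never consulted:
import org.apache.http.impl.client.AIMDBackoffManager;
import org.apache.http.impl.client.DefaultBackoffStrategy;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.PoolingClientConnectionManager;

public final class BackoffClientFactory {

    private static final PoolingClientConnectionManager POOL = new PoolingClientConnectionManager();

    public static DefaultHttpClient getHttpClient() {
        // The pool is passed once as the client's connection manager and once as the
        // ConnPoolControl whose per-route limits AIMDBackoffManager adjusts.
        DefaultHttpClient httpClient = new DefaultHttpClient(POOL);
        httpClient.setBackoffManager(new AIMDBackoffManager(POOL));
        // Decides *when* to back off (503 responses, connect/socket timeouts);
        // without a strategy the backoff manager is never invoked.
        httpClient.setConnectionBackoffStrategy(new DefaultBackoffStrategy());
        return httpClient;
    }
}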

Related

How do I make a Java function that retries a URL connection every half second if the connection takes too long?

So I have a problem with a Java program I have. The program's basic functionality involves connecting to a web API for data. The function that does that is something like this:
public static Object getData(String sURL) throws IOException {
    URL url = new URL(sURL);
    URLConnection request = url.openConnection();
    request.connect();
    return request.getContent();
}
The code works fine as it is, but recently, after my house changed ISPs, I have found that sometimes the connections take an unreasonably long amount of time, something like 10 seconds or more in about 10% of attempts, while the other 90% takes only around 200ms. I have found it to be faster to ask my program to call the function again in a different thread than to wait for some of these connections to finally connect.
Therefore, I want to change the function so that if the connection has not been established after 500ms, it disconnects and a new connection is attempted. How could I do this?
Somewhere online I read that HttpURLConnection might help, but I am not sure how.
URLConnection allows you to specify the connect and read timeout prior to calling connect():
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/net/URLConnection.html#setConnectTimeout(int)
Sets a specified timeout value, in milliseconds, to be used when opening a communications link to the resource referenced by this URLConnection. If the timeout expires before the connection can be established, a java.net.SocketTimeoutException is raised. A timeout of zero is interpreted as an infinite timeout.
With 500ms timeout:
try {
    URLConnection request = url.openConnection();
    request.setConnectTimeout(500); // 500 ms
    request.connect();
    // on successful connection
} catch (SocketTimeoutException ex) {
    // on request timeout
}
You can pack this into a loop, but I recommend limiting the number of attempts made.
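A minimal sketch of such a loop, reusing the getData method from the question (it also needs java.net.SocketTimeoutException imported); the cap of 5 attempts is just an example:
public static Object getData(String sURL) throws IOException {
    final int maxAttempts = 5;        // cap the retries
    final int connectTimeoutMs = 500; // give up on a single attempt after 500 ms
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            URL url = new URL(sURL);
            URLConnection request = url.openConnection();
            request.setConnectTimeout(connectTimeoutMs);
            request.connect();
            return request.getContent();
        } catch (SocketTimeoutException ex) {
            // connection not established within 500 ms, try again
        }
    }
    throw new IOException("Could not connect to " + sURL + " after " + maxAttempts + " attempts");
}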
Java's URLConnection doesn't have retry capabilities in Java 8, therefore the best way to achieve this is to use an appropriate standalone third-party library such as Apache HttpClient.
It is by far the best standalone third-party HTTP client with advanced capabilities as of 2020, and it's still maintained.
By default, Apache HttpClient 4.x uses the default implementation of org.apache.http.client.HttpRequestRetryHandler, which retries 3 times, but you can use a custom implementation instead.
The configuration might look like this (fully qualified names are used for example's sake):
org.apache.http.client.HttpClient httpClient = org.apache.http.impl.client.HttpClients.custom()
        .setRetryHandler(yourCustomRetryHandlerInstance)
        // other config
        .build();
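For completeness, a hedged sketch of what such a custom handler might look like in HttpClient 4.x (the choice of which exceptions to retry is just an example):
// retry transient failures up to 3 times
org.apache.http.client.HttpRequestRetryHandler retryHandler =
        (exception, executionCount, context) -> {
            if (executionCount > 3) {
                return false; // give up after 3 attempts
            }
            return exception instanceof java.net.SocketTimeoutException
                    || exception instanceof org.apache.http.NoHttpResponseException;
        };

org.apache.http.client.HttpClient httpClient = org.apache.http.impl.client.HttpClients.custom()
        .setRetryHandler(retryHandler)
        .build();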
There is no way I can reproduce that problem using my ISP.
I suggest you dig deeper into the problem and find a better solution. Sending another request just doesn't seem good enough to me. Maybe try a different way to get the data and see if that works for you. Can't say for sure as I can't reproduce the problem.

httpclient Connection reset [duplicate]

I'm creating a (well behaved) web spider and I notice that some servers are causing Apache HttpClient to give me a SocketException -- specifically:
java.net.SocketException: Connection reset
The code that causes this is:
// Execute the request
HttpResponse response;
try {
    response = httpclient.execute(httpget); // httpclient is of type HttpClient
} catch (NullPointerException e) {
    return; // deep down, apache http sometimes throws a null pointer...
}
For most servers it's just fine. But for others, it immediately throws a SocketException.
Example of site that causes immediate SocketException: http://www.bhphotovideo.com/
Works great (as do most websites): http://www.google.com/
Now, as you can see, www.bhphotovideo.com loads fine in a web browser. It also loads fine when I don't use Apache's HTTP Client. (Code like this:)
HttpURLConnection c = (HttpURLConnection) url.openConnection();
BufferedInputStream in = new BufferedInputStream(c.getInputStream());
Reader r = new InputStreamReader(in);
int i;
while ((i = r.read()) != -1) {
    source.append((char) i);
}
So, why don't I just use this code instead? Well there are some key features in Apache's HTTP Client that I need to use.
Does anyone know what causes some servers to cause this exception?
Research so far:
Problem occurs on my local Mac dev machines AND an AWS EC2 Instance, so it's not a local firewall.
It seems the error isn't caused by the remote machine because the exception doesn't say "by peer"
This Stack Overflow question seems relevant: java.net.SocketException: Connection reset, but the answers don't show why this would happen only with Apache HTTP Client and not with other approaches.
Bonus question: I'm doing a fair amount of crawling with this system. Is there generally a better Java class for this other than Apache HTTP Client? I've found a number of issues (such as the NullPointerException I have to catch in the code above). It seems that HTTPClient is very picky about server communications -- more picky than I'd like for a crawler that can't just break when a server doesn't behave.
Thanks all!
Solution
Honestly, I don't have a perfect solution, but it works, so that's good enough for me.
As pointed out by oleg below, Bixo has created a crawler that customizes HttpClient to be more forgiving to servers. To "get around" the issue more than fix it, I just used SimpleHttpFetcher provided by Bixo here:
(link removed - SO thinks I'm a spammer, so you'll have to google it yourself)
SimpleHttpFetcher fetch = new SimpleHttpFetcher(new UserAgent("botname", "contact@yourcompany.com", "ENTER URL"));
try {
    FetchedResult result = fetch.fetch("ENTER URL");
    System.out.println(new String(result.getContent()));
} catch (BaseFetchException e) {
    e.printStackTrace();
}
The downside to this solution is that there are a lot of dependencies for Bixo, so this may not be a good workaround for everyone. However, you can always just work through their use of DefaultHttpClient and see how they instantiated it to get it to work. I decided to use the whole class because it handles some things for me, like automatic redirect following (and reporting the final destination URL), that are helpful.
Thanks for the help all.
Edit: TinyBixo
Hi all. So, I loved how Bixo worked, but didn't like that it had so many dependencies (including all of Hadoop). So, I created a vastly simplified Bixo, without all the dependencies. If you're running into the problems above, I would recommend using it (and feel free to make pull requests if you'd like to update it!)
It's available here: https://github.com/juliuss/TinyBixo
First, to answer your question:
The connection reset was caused by a problem on the server side. Most likely the server failed to parse the request or was unable to process it and dropped the connection as a result without returning a valid response. There is likely something in the HTTP requests generated by HttpClient that causes server side logic to fail, probably due to a server side bug. Just because the error message does not say 'by peer' does not mean the connection reset took place on the client side.
A few remarks:
(1) Several popular web crawlers such as bixo http://openbixo.org/ use HttpClient without major issues, but pretty much all of them had to tweak HttpClient behavior to make it more lenient about common HTTP protocol violations. By default, HttpClient is rather strict about HTTP protocol compliance.
(2) Why did you not report the NPE problem, or any other problem you have been experiencing, to the HttpClient project?
These two settings will sometimes help:
client.getParams().setParameter("http.socket.timeout", new Integer(0));
client.getParams().setParameter("http.connection.stalecheck", new Boolean(true));
The first sets the socket timeout to be infinite.
Try getting a network trace using Wireshark, and augment that with log4j logging of the HttpClient. That should show why the connection is being reset.
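If a logging framework is not already wired up, a quick way to turn on context and wire logging for HttpClient 4.x is through commons-logging's SimpleLog; this is a sketch of the system-properties approach (set them before any HttpClient class is loaded, or use the equivalent log4j logger names org.apache.http and org.apache.http.wire):
// route commons-logging to SimpleLog and enable HttpClient context + wire logging
System.setProperty("org.apache.commons.logging.Log", "org.apache.commons.logging.impl.SimpleLog");
System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "true");
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http", "DEBUG");
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.wire", "DEBUG");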

How to set Sesame 2.8.0 RepositoryConnection timeout

I am trying to implement something like a circuit breaker for my Sesame connections to the back-end database. When the database is absent I want to know this after 2 seconds, rather than relying on the client's default timeouts. I could possibly overcome this with my own FutureTasks, in which I would execute the repository initialization and obtain the connection. However, in the logs I can see that the Sesame client uses o.a.h.i.c.PoolingClientConnectionManager, which I bet is passed an ExecutorService and some default timeouts. This would make my FutureTask solution pretty messy. Is there an easier way to set timeouts for the Sesame client?
You can set the query and update timeout, specifically, on the query/update object itself:
RepositoryConnection conn = ....;
...
TupleQuery query = conn.prepareTupleQuery(QueryLanguage.SPARQL, "SELECT ...");
query.setMaxExecutionTime(2);
However, if you want to set a general timeout for all api calls over HTTP, the only way to currently do that is by obtaining a reference to the HttpClient object, and reconfigure it:
HTTPRepository repo = ....;
AbstractHttpClient httpClient = (AbstractHttpClient)((SesameClientImpl)repo.getSesameClient()).getHttpClient();
HttpParams params = httpClient.getParams();
params.setIntParameter(CoreConnectionPNames.SO_TIMEOUT, 2000);
httpClient.setParams(params);
As you can see, this is rather brittle (lots of explicit casts), and uses an approach that is deprecated in Apache HttpClient 4.4. So I don't exactly recommend this as a stable solution, but it should provide a workaround in the short term.
In the longer term the Sesame dev team are working on more convenient access to the configuration of the httpclient.

Apache HttpClient Interim Error: NoHttpResponseException

I have a webservice which accepts a POST method with XML. It works fine, but on some random occasions it fails to communicate with the server, throwing an IOException with the message The target server failed to respond. The subsequent calls work fine.
It happens mostly when I make some calls and then leave my application idle for 10-15 minutes; the first call I make after that returns this error.
I tried a couple of things.
I set up the retry handler like this:
HttpRequestRetryHandler retryHandler = new HttpRequestRetryHandler() {
    public boolean retryRequest(IOException e, int retryCount, HttpContext httpCtx) {
        if (retryCount >= 3) {
            Logger.warn(CALLER, "Maximum tries reached, exception would be thrown to outer block");
            return false;
        }
        if (e instanceof org.apache.http.NoHttpResponseException) {
            Logger.warn(CALLER, "No response from server on " + retryCount + " call");
            return true;
        }
        return false;
    }
};

httpPost.getParams().setParameter(HttpMethodParams.RETRY_HANDLER, retryHandler);
but this retry handler never gets called (yes, I am using the right instanceof clause); while debugging, this class is never entered.
I even tried setting HttpProtocolParams.setUseExpectContinue(httpClient.getParams(), false); but to no avail. Can someone suggest what I can do now?
IMPORTANT
Besides figuring out why I am getting the exception, one of my important concerns is why the retry handler isn't working here.
Most likely persistent connections that are kept alive by the connection manager become stale. That is, the target server shuts down the connection on its end without HttpClient being able to react to that event, while the connection is being idle, thus rendering the connection half-closed or 'stale'. Usually this is not a problem. HttpClient employs several techniques to verify connection validity upon its lease from the pool. Even if the stale connection check is disabled and a stale connection is used to transmit a request message the request execution usually fails in the write operation with SocketException and gets automatically retried. However under some circumstances the write operation can terminate without an exception and the subsequent read operation returns -1 (end of stream). In this case HttpClient has no other choice but to assume the request succeeded but the server failed to respond most likely due to an unexpected error on the server side.
The simplest way to remedy the situation is to evict expired connections and connections that have been idle longer than, say, 1 minute from the pool after a period of inactivity. For details please see section 2.5, Connection eviction policy, of the HttpClient 4.5 tutorial.
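As an illustration of that eviction policy, here is a hedged sketch using the HttpClient 4.5 builder, which starts a background thread that evicts expired connections and connections idle for more than a minute (the thresholds are just example values):
import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();

CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(cm)
        .evictExpiredConnections()                  // drop connections past their keep-alive/expiry
        .evictIdleConnections(60, TimeUnit.SECONDS) // drop connections idle for more than 60 s
        .build();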
The accepted answer is right but lacks a solution. To avoid this error, you can add setHttpRequestRetryHandler (or setRetryHandler, for Apache HttpComponents 4.4) to your HTTP client, as in this answer.
HttpClient 4.4 suffered from a bug in this area relating to validating possibly stale connections before returning them to the requestor. It didn't validate whether a connection was stale, and this then resulted in an immediate NoHttpResponseException.
This issue was resolved in HttpClient 4.4.1. See this JIRA and the release notes.
Solution: change the ReuseStrategy to never
Since this problem is very complex and there are so many different factors which can fail, I was happy to find this solution in another post: How to solve org.apache.http.NoHttpResponseException
Never reuse connections:
configure in org.apache.http.impl.client.AbstractHttpClient:
httpClient.setReuseStrategy(new NoConnectionReuseStrategy());
The same can be configured on a org.apache.http.impl.client.HttpClientBuilder builder:
builder.setConnectionReuseStrategy(new NoConnectionReuseStrategy());
Although the accepted answer is right, IMHO it is just a workaround.
To be clear: it's a perfectly normal situation that a persistent connection may become stale. But unfortunately it's very bad when the HTTP client library cannot handle it properly.
Since this faulty behavior in Apache HttpClient was not fixed for many years, I definitely would prefer to switch to a library that can easily recover from a stale connection problem, e.g. OkHttp.
Why?
OkHttp pools http connections by default.
It gracefully recovers from situations where an HTTP connection becomes stale and the request cannot be retried because it is not idempotent (e.g. POST). I cannot say the same about Apache HttpClient (see the NoHttpResponseException mentioned above).
Supports HTTP/2.0 from early drafts and beta versions.
When I switched to OkHttp, my problems with NoHttpResponseException disappeared forever.
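For context, here is a minimal OkHttp 3.x sketch (the URL is a placeholder); connection pooling and recovery from stale pooled connections are handled by the client itself:
import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

OkHttpClient client = new OkHttpClient();

Request request = new Request.Builder()
        .url("https://example.com/api") // hypothetical endpoint
        .build();

try (Response response = client.newCall(request).execute()) {
    System.out.println(response.body().string());
} catch (IOException e) {
    e.printStackTrace();
}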
Nowadays, most HTTP connections are considered persistent unless declared otherwise. However, to save server resources the connection is rarely kept open forever; the default keep-alive timeout of many servers is rather short, for example 5 seconds for Apache httpd 2.2 and above.
The org.apache.http.NoHttpResponseException error most likely comes from a persistent connection that was closed by the server.
It's possible to set the maximum time to keep unused connections open in the Apache Http client pool, in milliseconds.
With Spring Boot, one way to achieve this:
public class RestTemplateCustomizers {

    static public class MaxConnectionTimeCustomizer implements RestTemplateCustomizer {

        @Override
        public void customize(RestTemplate restTemplate) {
            HttpClient httpClient = HttpClientBuilder
                    .create()
                    .setConnectionTimeToLive(1000, TimeUnit.MILLISECONDS)
                    .build();
            restTemplate.setRequestFactory(
                    new HttpComponentsClientHttpRequestFactory(httpClient));
        }
    }
}

// In your service that uses a RestTemplate
public MyRestService(RestTemplateBuilder builder) {
    restTemplate = builder
            .customizers(new RestTemplateCustomizers.MaxConnectionTimeCustomizer())
            .build();
}
This can happen if disableContentCompression() is set on a pooling manager assigned to your HttpClient, and the target server is trying to use gzip compression.
Same problem for me on Apache HttpClient 4.5.5; adding the default header
Connection: close
resolved the problem.
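One way such a default header can be applied to every request (a sketch, assuming the HttpClient 4.x builder API):
import java.util.Collections;
import java.util.List;
import org.apache.http.Header;
import org.apache.http.HttpHeaders;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicHeader;

// send "Connection: close" on every request built by this client
List<Header> defaultHeaders = Collections.singletonList(
        new BasicHeader(HttpHeaders.CONNECTION, "close"));

CloseableHttpClient httpClient = HttpClients.custom()
        .setDefaultHeaders(defaultHeaders)
        .build();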
Use PoolingHttpClientConnectionManager instead of BasicHttpClientConnectionManager
BasicHttpClientConnectionManager will make an effort to reuse the connection for subsequent requests with the same route. It will, however, close the existing connection and re-open it when a request for a different route comes in.
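A minimal sketch of swapping in the pooling manager (HttpClient 4.3+; the limits are just example values):
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(50);           // total connections across all routes
cm.setDefaultMaxPerRoute(10); // connections per route

CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(cm)
        .build();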
I faced the same issue and resolved it by adding "Connection: close" via a WireMock extension.
Step 1: create a new class ConnectionCloseExtension:
import com.github.tomakehurst.wiremock.common.FileSource;
import com.github.tomakehurst.wiremock.extension.Parameters;
import com.github.tomakehurst.wiremock.extension.ResponseTransformer;
import com.github.tomakehurst.wiremock.http.HttpHeader;
import com.github.tomakehurst.wiremock.http.HttpHeaders;
import com.github.tomakehurst.wiremock.http.Request;
import com.github.tomakehurst.wiremock.http.Response;

public class ConnectionCloseExtension extends ResponseTransformer {

    @Override
    public Response transform(Request request, Response response, FileSource files, Parameters parameters) {
        return Response.Builder
                .like(response)
                .headers(HttpHeaders.copyOf(response.getHeaders())
                        .plus(new HttpHeader("Connection", "Close")))
                .build();
    }

    @Override
    public String getName() {
        return "ConnectionCloseExtension";
    }
}
Step 2: set the extension class on the WireMockServer like below:
final WireMockServer wireMockServer = new WireMockServer(options()
        .extensions(ConnectionCloseExtension.class)
        .port(httpPort));

