I'm working with a slow web service (about 4 minutes per request) and I need to make about 100 requests in two hours, so I've decided to use multiple threads. The problem is that only 2 threads can run, as the stub rejects all the others. Here I've found an explanation and a possible solution:
I had the same problem. It seems that the source of it is the defaultMaxConnectionsPerHost value in MultiThreadedHttpConnectionManager, which equals 2. The workaround for me was to create my own instance of MultiThreadedHttpConnectionManager and use it in the service stub, something like in the example below.
I've done as the author said, and passed an HttpClient to the stub with higher setMaxTotalConnections and setDefaultMaxConnectionsPerHost values, but the problem is that now the application freezes (well, it doesn't really freeze, it just does nothing).
That's my code:
public ReportsStub createReportsStub(String url, HttpTransportProperties.Authenticator auth) {
    ReportsStub stub = null;
    HttpClient httpClient = null;
    try {
        stub = new ReportsStub(url);
        httpClient = createHttpClient(10, 5);
        stub._getServiceClient().getOptions().setTimeOutInMilliSeconds(10000000);
        stub._getServiceClient().getOptions().setProperty(org.apache.axis2.transport.http.HTTPConstants.AUTHENTICATE, auth);
        stub._getServiceClient().getOptions().setProperty(org.apache.axis2.transport.http.HTTPConstants.CHUNKED, false);
        stub._getServiceClient().getServiceContext().getConfigurationContext().setProperty(HTTPConstants.CACHED_HTTP_CLIENT, httpClient);
        return stub;
    } catch (AxisFault e) {
        e.printStackTrace();
    }
    return stub;
}
protected HttpClient createHttpClient(int maxTotal, int maxPerHost) {
    MultiThreadedHttpConnectionManager httpConnectionManager = new MultiThreadedHttpConnectionManager();
    HttpConnectionManagerParams params = httpConnectionManager.getParams();
    if (params == null) {
        params = new HttpConnectionManagerParams();
        httpConnectionManager.setParams(params);
    }
    params.setMaxTotalConnections(maxTotal);
    params.setDefaultMaxConnectionsPerHost(maxPerHost);
    HttpClient httpClient = new HttpClient(httpConnectionManager);
    return httpClient;
}
Then I pass that stub and the request to each one of the threads and run them. If I don't set the HttpClient and use the default, only two threads execute; if I do set it, the application does not work. Any idea?
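For reference, a minimal sketch of the dispatch described above, assuming each task just invokes one operation on the shared stub (the requests collection and callService(...) name are placeholders, not the original code):
// Sketch only: submit one task per request to a small pool (uses java.util.concurrent).
ExecutorService executor = Executors.newFixedThreadPool(5);
List<Future<?>> futures = new ArrayList<Future<?>>();
for (final Object request : requests) { // 'requests' is a placeholder collection
    futures.add(executor.submit(new Runnable() {
        public void run() {
            callService(stub, request); // placeholder for the actual stub operation
        }
    }));
}
executor.shutdown();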
If anyone wants to create a dynamic REST client in WSO2 Axis2, the following code worked for me...
// Set the max connections to 20 and the timeout to 20 seconds
MultiThreadedHttpConnectionManager multiThreadedHttpConnectionManager = new MultiThreadedHttpConnectionManager();
HttpConnectionManagerParams params = new HttpConnectionManagerParams();
params.setDefaultMaxConnectionsPerHost(20);
params.setMaxTotalConnections(20);
params.setSoTimeout(20000);
params.setConnectionTimeout(20000);
multiThreadedHttpConnectionManager.setParams(params);
HttpClient httpClient = new HttpClient(multiThreadedHttpConnectionManager);
// Create the service client
ServiceClient serviceClient = new ServiceClient();
Options options = new Options();
options.setTo(new EndpointReference(endpoint));
options.setProperty(Constants.Configuration.ENABLE_REST, Constants.VALUE_TRUE);
options.setProperty(Constants.Configuration.HTTP_METHOD, Constants.Configuration.HTTP_METHOD_POST);
serviceClient.getServiceContext().getConfigurationContext().setProperty(HTTPConstants.CACHED_HTTP_CLIENT, httpClient);
serviceClient.setOptions(options);
// Blocking call
OMElement result = serviceClient.sendReceive(ClientUtils.getRestPayload()); // just a dummy payload <root></root>
// Cleanup the transport after each call; this is needed, otherwise the HTTP connection gets blocked
serviceClient.cleanupTransport();
I set the max connections to 20 and the timeout to 20 seconds.
Also, my 'endpoint' contains all the REST arguments; I'm just using a dummy payload "<root></root>" in the serviceClient.sendReceive() method.
I noticed this in a corporate web application that called a back-end service that could take a long time to respond. The web application would lock up because the limit of 2 connections to a single host kicked in.
You call httpConnectionManager.setParams(params) before you call params.setDefaultMaxConnectionsPerHost(). Have you tried calling these methods in the opposite order, to confirm that the params aren't applied (copied) inside httpConnectionManager.setParams() itself?
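In other words, something like the following ordering (just a sketch of what the comment suggests trying, not a confirmed fix):
// Configure the params first, then hand them to the connection manager.
MultiThreadedHttpConnectionManager connectionManager = new MultiThreadedHttpConnectionManager();
HttpConnectionManagerParams params = new HttpConnectionManagerParams();
params.setMaxTotalConnections(maxTotal);
params.setDefaultMaxConnectionsPerHost(maxPerHost);
connectionManager.setParams(params); // apply only after the values are set
HttpClient httpClient = new HttpClient(connectionManager);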
Related
We have a generic application which delivers messages to different POST endpoints, and we are using CloseableHttpAsyncClient for this purpose. It's been built/initialized as follows:
private static CloseableHttpAsyncClient get() {
    CloseableHttpAsyncClient lInstance;
    IOReactorConfig ioReactorConfig = IOReactorConfig.custom()
            .setIoThreadCount(100)
            .setConnectTimeout(10000)
            .setSoTimeout(10000).build();
    ConnectingIOReactor ioReactor = null;
    try {
        ioReactor = new DefaultConnectingIOReactor(ioReactorConfig);
    } catch (IOReactorException e) {
        logger_.logIfEnabled(Level.ERROR, e);
    }
    PoolingNHttpClientConnectionManager connManager = new PoolingNHttpClientConnectionManager(ioReactor);
    connManager.setDefaultMaxPerRoute(50);
    connManager.setMaxTotal(5000);
    connManager.closeIdleConnections(10000, TimeUnit.MILLISECONDS);
    baseRequestConfig = RequestConfig.custom().setConnectTimeout(10000)
            .setConnectionRequestTimeout(10000)
            .setSocketTimeout(10000).build();
    lInstance = HttpAsyncClients.custom().setDefaultRequestConfig(baseRequestConfig)
            .setConnectionManager(connManager).build();
    lInstance.start();
    return lInstance;
}
This is pre-built and initialized. As and when a new request arrives at our application, a new POST request is built based on the message and authentication type: httpPost = new HttpPost(builder.build());
After setting the required headers, payload, etc., the existing httpClient is used to send the request:
httpClient.execute(httpPost, httpContext, null);
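For reference, the same call can take a FutureCallback instead of null so that failures become visible somewhere; a minimal sketch (the handler bodies are placeholders):
// Sketch: observe the outcome of the async execution instead of discarding it.
httpClient.execute(httpPost, httpContext, new FutureCallback<HttpResponse>() {
    public void completed(HttpResponse response) {
        // inspect response.getStatusLine() here
    }
    public void failed(Exception ex) {
        // log or retry here
    }
    public void cancelled() {
        // handle cancellation here
    }
});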
Now the question comes from a new requirement to support client-certificate-based authentication. Since our current approach is to create the httpClient up front, how can we change its behaviour so that it sends a client certificate to some endpoints while working as-is for the endpoints that don't require a certificate to be sent?
I know I can supply an SSLContext to the CloseableHttpAsyncClient when creating it, but at creation time I don't know whether any endpoint will require certificate-based authentication. We can have many endpoints that support client certificates, and that is only known at runtime.
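One possible direction, sketched under the assumption that running a second client is acceptable: keep the existing client for plain endpoints and lazily build another async client with key material loaded, then pick the client per request based on the target endpoint. The keystore path, type and password below are illustrative, and exception handling is omitted.
// Sketch: a second async client carrying client-certificate key material.
KeyStore clientStore = KeyStore.getInstance("PKCS12");
try (InputStream in = new FileInputStream("/path/to/client-cert.p12")) { // illustrative path
    clientStore.load(in, "changeit".toCharArray());                      // illustrative password
}
SSLContext sslContext = SSLContexts.custom()
        .loadKeyMaterial(clientStore, "changeit".toCharArray())
        .build();
CloseableHttpAsyncClient mtlsClient = HttpAsyncClients.custom()
        .setDefaultRequestConfig(baseRequestConfig)
        .setSSLContext(sslContext)
        .build();
mtlsClient.start();
// At send time, choose mtlsClient or the plain httpClient depending on the endpoint.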
I am trying to GET thousands of URLs as quickly as I can. Using Apache HttpClient v4.x I can do this, but usually around 3-5% of the requests fail: a few with failed host lookups (<10% of the errors) and the rest either timing out (<10%) or hitting network read errors.
So basically my loop iterates over the URLs and submits the worker tasks to the executor service. Below are the code snippets for the important pieces:
Executor
--------
public static ExecutorService pool = Executors.newFixedThreadPool(400);
AppConfig.monitors.forEach((key, monitor) -> {
    results.add(AppConfig.pool.submit(new WebRequest(monitor)));
});
Notes:
I have tried thread counts across the whole 50-1000 range.
I submit each task as a Callable, which returns a Future, and I iterate over the results in a subsequent loop (sketched below).
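A minimal sketch of that collection loop, assuming results is a List<Future<Monitor>>:
// Sketch of the subsequent loop over the futures returned by submit().
for (Future<Monitor> f : results) {
    try {
        Monitor m = f.get();        // blocks until this task finishes
        // record m here
    } catch (ExecutionException e) {
        // the task threw; count it as a failed check
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}
AppConfig.pool.shutdown();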
CLIENT Code
-----------
cm = new PoolingHttpClientConnectionManager();
cm.setDefaultMaxPerRoute(2);
cm.setMaxTotal(1000);
RequestConfig rc = RequestConfig.custom()
.setSocketTimeout(4000)
.setConnectTimeout(4000)
.setCookieSpec(CookieSpecs.IGNORE_COOKIES)
.setMaxRedirects(1)
.setRedirectsEnabled(true)
.setCircularRedirectsAllowed(false)
.build();
sslContext = SSLContextBuilder
.create()
.loadTrustMaterial(TrustSelfSignedStrategy.INSTANCE)
.build();
sslcsf = new SSLConnectionSocketFactory(sslContext, new NoopHostnameVerifier());
client = HttpClients.custom()
        .setConnectionManager(cm)
        .setDefaultHeaders(headers)
        .setDefaultRequestConfig(rc)
        .setSSLSocketFactory(sslcsf)
        .disableAutomaticRetries()
        .setRedirectStrategy(DefaultRedirectStrategy.INSTANCE)
        .build();
Notes:
All URLs are final and there are neither regular nor circular redirects.
Also note that I have run these URLs through HttpClient one at a time and they work without issue, so in general timeouts should not occur.
All of the domains are accessible.
REQUEST Code
------------
public class WebRequest implements Callable<Monitor> {

    private final Monitor monitor;

    public WebRequest(Monitor monitor) {
        this.monitor = monitor;
    }

    @Override
    public Monitor call() throws Exception {
        HttpContext context = HttpClientContext.create();
        HttpRequestBase request = new HttpGet(monitor.getUrl());
        try (CloseableHttpResponse response = WebClient.client.execute(request, context)) {
            request.releaseConnection();
        }
        return monitor;
    }
}
Please let me know if you need additional information.
I'm currently trying to build an OSGi service that sends a POST request to a defined API. This API is used to virus-scan a file, which is contained in the request body (JSON) as a Base64 string.
For this I am using the Apache HttpClient contained in the Adobe AEM uberjar v6.4.0.
My current implementation works fine for smaller files (<2 MB), but as the file size gets bigger, the behaviour gets strange:
When I upload a 9 MB file, the request executes for about a minute, then gets an HTTP 400 as the response, and afterwards retries the request 7 times.
I tried to set a timeout on the request. If the timeout is below 60,000 ms, a TimeoutException is thrown; if it's greater than 60,000 ms, I get an HTTP 400 Bad Request. I guess the latter is the API's fault, which I need to clarify.
However, in both cases, after the exception is thrown, httpClient retries the request, and I have not been able to prevent that so far. I'm struggling with the many deprecated "how-tos" on the web, and now I'm here.
I have shortened the code a bit, as it's somewhat long (mostly removing debug messages and some "if... return false" checks at the beginning). My code:
public boolean isAttachmentClean(InputStream inputStream) throws IOException, JSONException, ServiceUnavailableException {
    // prevent httpClient from retrying in case of an IOException
    final HttpRequestRetryHandler retryHandler = new DefaultHttpRequestRetryHandler(0, false);
    HttpClient httpClient = HttpClients.custom().setRetryHandler(retryHandler).build();
    HttpPost httpPost = new HttpPost(serviceUrl);
    httpPost.setHeader("accept", "application/json");
    // set some more headers...

    // set timeout for POST from OSGi config
    RequestConfig timeoutConfig = RequestConfig.custom()
            .setConnectionRequestTimeout(serviceTimeout)
            .setConnectTimeout(serviceTimeout)
            .setSocketTimeout(serviceTimeout)
            .build();
    httpPost.setConfig(timeoutConfig);

    // create request body data
    String requestBody;
    try {
        requestBody = buildDataJson(inputStream);
    } finally {
        inputStream.close();
    }
    HttpEntity requestBodyEntity = new ByteArrayEntity(requestBody.getBytes("UTF-8"));
    httpPost.setEntity(requestBodyEntity);

    // execute and get the response
    HttpResponse response = httpClient.execute(httpPost);
    if (response.getStatusLine().getStatusCode() != HttpServletResponse.SC_OK) {
        httpPost.abort();
        throw new ServiceUnavailableException("API not available, Response Code was " + response.getStatusLine().getStatusCode());
    }
    HttpEntity entity = response.getEntity();
    boolean result = false;
    if (entity != null) {
        InputStream apiResult = entity.getContent();
        try {
            // check the response from the API (virus yes or no)
            result = evaluateResponse(apiResult);
        } finally {
            apiResult.close();
        }
    }
    return result;
}
"buildDataJson()" simply reads the InputStream and creates a JSON needed for the API call.
"evaluateResponse()" also reads the InputStream, transforms it into a JSON and checks for a property named "Status:" "Clean".
I'd appreciate any tips on why this request is retried over and over again.
/edit: So far I've found that Apache HttpClient has a default mechanism that retries a request in case of an IOException, which is what I get here. Still, I have not found out how to deactivate these retries.
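For what it's worth, with the 4.x builder the automatic retry mechanism can also be switched off wholesale, as an alternative to supplying a zero-retry handler:
// Alternative to new DefaultHttpRequestRetryHandler(0, false): disable retries on the builder.
CloseableHttpClient httpClient = HttpClients.custom()
        .disableAutomaticRetries()
        .build();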
I need to set a timeout for the HTTP requests we make to a service (not a web service). We are using the Apache HTTP Client. I have added these two lines of code to set the timeout on the request and response to the service.
HttpConnectionParams.setConnectionTimeout(params, 10000);
HttpConnectionParams.setSoTimeout(params, 10000);
1) Currently I have set 10 seconds as the timeout, since I see the response coming from the service almost instantaneously. Should I increase or decrease it?
2) What will happen when the response takes more than 10 seconds? Will it throw an exception, and which exception will it be? Is there anything else I need to add to set the timeout in the code below?
public HashMap<String, Object> getJSONData(String url) throws Exception {
    DefaultHttpClient httpClient = new DefaultHttpClient();
    HttpParams params = httpClient.getParams();
    HttpConnectionParams.setConnectionTimeout(params, 10000);
    HttpConnectionParams.setSoTimeout(params, 10000);
    HttpHost proxy = new HttpHost(getProxy(), getProxyPort());
    ConnRouteParams.setDefaultProxy(params, proxy);
    URI uri;
    InputStream data = null;
    try {
        uri = new URI(url);
        HttpGet method = new HttpGet(uri);
        HttpResponse response = httpClient.execute(method);
        data = response.getEntity().getContent();
    } catch (Exception e) {
        e.printStackTrace();
    }
    Reader r = new InputStreamReader(data);
    HashMap<String, Object> jsonObj = (HashMap<String, Object>) GenericJSONUtil.fromJson(r);
    return jsonObj;
}
I am guessing many people come here because of the title and because the HttpConnectionParams API is deprecated.
Using a recent version of the Apache HTTP Client, you can set these timeouts by setting a RequestConfig on the request:
HttpPost request = new HttpPost(url);
RequestConfig requestConfig = RequestConfig.custom()
.setSocketTimeout(TIMEOUT_MILLIS)
.setConnectTimeout(TIMEOUT_MILLIS)
.setConnectionRequestTimeout(TIMEOUT_MILLIS)
.build();
request.setConfig(requestConfig);
Alternatively, you can also set this when you create your HTTP Client, using the builder API for the HTTP client, but you'll also need to build a custom connection manager with a custom socket config.
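For example, a rough sketch of that client-level configuration (reusing TIMEOUT_MILLIS from above; this is one possible setup, not the only one):
// Sketch: timeouts configured once on the client rather than per request.
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setDefaultSocketConfig(SocketConfig.custom()
        .setSoTimeout(TIMEOUT_MILLIS)
        .build());
CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(cm)
        .setDefaultRequestConfig(RequestConfig.custom()
                .setConnectTimeout(TIMEOUT_MILLIS)
                .setConnectionRequestTimeout(TIMEOUT_MILLIS)
                .build())
        .build();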
The configuration example file is an excellent resource for finding out how to configure the Apache HTTP Client.
The exceptions you'll see will be ConnectTimeoutException and SocketTimeoutException. The actual timeout values you use should be the maximum time your application is willing to wait. One important note about the read timeout is that it corresponds to the timeout on a socket read. So it's not the time allowed for the full response to arrive, but rather the time given to a single socket read. So if there are 4 socket reads, each taking 9 seconds, your total read time is 9 * 4 = 36 seconds.
If you want to specify a total time for the response to arrive (including connect and total read time), you can wrap the call in a thread and use a thread timeout for that. For example, I usually do something like this:
Future<T> future = null;
future = pool.submit(new Callable<T>() {
    public T call() {
        return executeImpl(url);
    }
});
try {
    return future.get(10, TimeUnit.SECONDS);
} catch (InterruptedException e) {
    log.warn("task interrupted", name);
} catch (ExecutionException e) {
    log.error(name + " execution exception", e);
} catch (TimeoutException e) {
    log.debug("future timed out", name);
}
Some assumptions made in the code above: 1) this is in a function with a url parameter, 2) it's in a class with a name variable, 3) log is a log4j instance, and 4) pool is some thread pool executor. Note that even if you use a thread timeout, you should also specify a connect and socket timeout on the HttpClient, so that slow requests don't eat up the resources in the thread pool. Also note that I use a thread pool because I typically use this in a web service, so the pool is shared across a bunch of Tomcat threads. Your environment may be different, and you may prefer to simply spawn a new thread for each call.
Also, I've usually seen the timeouts set via member functions of the params, like this:
params.setConnectionTimeout(10000);
params.setSoTimeout(10000);
But perhaps your syntax works as well (not sure).
I'm using Java's HttpURLConnection to hit foo.com.
foo.com has multiple A-Records that point to different IP addresses (1.1.1.1 and 1.1.1.2)
If my first connect call resolves to 1.1.1.1, but then that machine goes down, will a subsequent connect call recognize this and try to connect on 1.1.1.2 instead?
Or do I need to implement this sort of logic myself using the InetAddress API?
I was able to resolve this by using Apache Commons HttpClient; see the code snippet below.
As I feared, the URLConnection provided by java.net is a very simplistic implementation and will only try the first IP address from the resolved list. If you really are not allowed to use another library, you will have to write your own error handling. It's kind of messy, since you will need to resolve all the IPs beforehand using InetAddress and connect to each IP yourself, passing the "Host: domain.name" header to the HTTP stack, until one of the IPs responds.
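If you do go down that road, a rough sketch of the manual approach (HTTP only; the Host-header trick does not carry over cleanly to HTTPS, and the timeout values are illustrative):
// Sketch: resolve every A record, then try each address while sending the
// original host name in the Host header.
InetAddress[] addresses = InetAddress.getAllByName("foo.com");
for (InetAddress address : addresses) {
    try {
        URL url = new URL("http://" + address.getHostAddress() + "/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Host", "foo.com");
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        if (conn.getResponseCode() == 200) {
            // success: read conn.getInputStream() and stop trying further addresses
            break;
        }
    } catch (IOException e) {
        // this address is unreachable; fall through and try the next one
    }
}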
The Apache library is far more robust and allows a great deal of customization. You can control how many times it will retry and, most importantly, it will automatically try all the IP addresses resolved for the same name until one of them responds successfully.
HttpRequestRetryHandler myRetryHandler = new HttpRequestRetryHandler() {
    @Override
    public boolean retryRequest(IOException exception, int count, HttpContext context) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
        return count < 30;
    }
};

ConnectionKeepAliveStrategy keepAlive = new ConnectionKeepAliveStrategy() {
    @Override
    public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
        return 500;
    }
};

DefaultHttpClient httpclient = new DefaultHttpClient();
httpclient.getParams().setParameter("http.socket.timeout", new Integer(2000));
httpclient.getParams().setParameter("http.connection.timeout", new Integer(2000));
httpclient.setHttpRequestRetryHandler(myRetryHandler);
httpclient.setKeepAliveStrategy(keepAlive);

HttpGet httpget = new HttpGet("http://remotehost.com");
HttpResponse httpres = httpclient.execute(httpget);
InputStream is = httpres.getEntity().getContent();
I hope this helps!