HttpPost request = new HttpPost("some url");
CloseableHttpResponse response = null;
try {
    // do some thing ....
    response = getHttpClient().execute(request);
    // do some other thing ....
} catch (Exception e) {
    // deal with exception
} finally {
    if (response != null) {
        try {
            response.close(); // (1)
        } catch (Exception e) {}
        request.releaseConnection(); // (2)
    }
}
I've made an HTTP request like the one above.
In order to release the underlying connection, is it correct to call (1) and (2)? And what is the difference between the two invocations?
Short answer:
request.releaseConnection() releases the underlying HTTP connection so that it can be reused. response.close() closes a stream (not a connection); that stream is the response content being streamed from the network socket.
Long answer:
The correct pattern to follow in any recent version (> 4.2, and probably even before that) is not to use releaseConnection.
request.releaseConnection() releases the underlying HTTP connection so the request can be reused; however, the Javadoc says:
A convenience method to simplify migration from HttpClient 3.1 API...
Instead of releasing the connection, we ensure the response content is fully consumed, which in turn ensures the connection is released and ready for reuse. A short example is shown below:
CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://targethost/homepage");
CloseableHttpResponse response1 = httpclient.execute(httpGet);
try {
    System.out.println(response1.getStatusLine());
    HttpEntity entity1 = response1.getEntity();
    // do something useful with the response body
    String bodyAsString = EntityUtils.toString(entity1);
    System.out.println(bodyAsString);
    // and ensure it is fully consumed (this is how the stream is released)
    EntityUtils.consume(entity1);
} finally {
    response1.close();
}
CloseableHttpResponse.close() closes the TCP socket
HttpPost.releaseConnection() closes the TCP socket
EntityUtils.consume(response.getEntity()) allows you to re-use the TCP socket
Details
CloseableHttpResponse.close() closes the TCP socket, preventing the connection from being re-used. You need to establish a new TCP connection in order to initiate another request.
This is the call chain that led me to the above conclusion:
HttpResponseProxy.close()
-> ConnectionHolder.close()
-> ConnectionHolder.releaseConnection(reusable=false)
-> managedConn.close()
-> BHttpConnectionBase.close()
-> Socket.close()
HttpPost.releaseConnection() also closes the socket. This is the call chain that led me to the above conclusion:
HttpPost.releaseConnection()
-> HttpRequestBase.releaseConnection()
-> AbstractExecutionAwareRequest.reset()
-> ConnectionHolder.cancel()
-> ConnectionHolder.abortConnection()
-> HttpConnection.shutdown()
Here is experimental code that also demonstrates the above three facts:
import java.lang.reflect.Constructor;
import java.net.Socket;
import java.net.SocketImpl;
import java.net.SocketImplFactory;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;
public class Main {

    private static SocketImpl newSocketImpl() {
        try {
            Class<?> defaultSocketImpl = Class.forName("java.net.SocksSocketImpl");
            Constructor<?> constructor = defaultSocketImpl.getDeclaredConstructor();
            constructor.setAccessible(true);
            return (SocketImpl) constructor.newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        // this is a hack that lets me listen to TCP socket creation
        final List<SocketImpl> allSockets = Collections.synchronizedList(new ArrayList<>());
        Socket.setSocketImplFactory(new SocketImplFactory() {
            public SocketImpl createSocketImpl() {
                SocketImpl socket = newSocketImpl();
                allSockets.add(socket);
                return socket;
            }
        });

        System.out.println("num of sockets after start: " + allSockets.size());

        CloseableHttpClient client = HttpClientBuilder.create().build();
        System.out.println("num of sockets after client created: " + allSockets.size());

        HttpGet request = new HttpGet("http://www.google.com");
        System.out.println("num of sockets after get created: " + allSockets.size());

        CloseableHttpResponse response = client.execute(request);
        System.out.println("num of sockets after get executed: " + allSockets.size());

        response.close();
        System.out.println("num of sockets after response closed: " + allSockets.size());

        response = client.execute(request);
        System.out.println("num of sockets after request executed again: " + allSockets.size());

        request.releaseConnection();
        System.out.println("num of sockets after release connection: " + allSockets.size());

        response = client.execute(request);
        System.out.println("num of sockets after request executed again for 3rd time: " + allSockets.size());

        EntityUtils.consume(response.getEntity());
        System.out.println("num of sockets after entityConsumed: " + allSockets.size());

        response = client.execute(request);
        System.out.println("num of sockets after request executed again for 4th time: " + allSockets.size());
    }
}
pom.xml
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.joseph</groupId>
    <artifactId>close.vs.release.conn</artifactId>
    <version>1.0.0</version>

    <properties>
        <maven.compiler.target>1.8</maven.compiler.target>
        <maven.compiler.source>1.8</maven.compiler.source>
    </properties>

    <build>
        <plugins>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.5.13</version>
        </dependency>
    </dependencies>
</project>
Output:
num of sockets after start: 0
num of sockets after client created: 0
num of sockets after get created: 0
num of sockets after get executed: 1
num of sockets after response closed: 1
num of sockets after request executed again: 2
num of sockets after release connection: 2
num of sockets after request executed again for 3rd time: 3
num of sockets after entityConsumed: 3
num of sockets after request executed again for 4th time: 3
Notice that both .close() and .releaseConnection() result in a new TCP connection. Only consuming the entity allows you to re-use the TCP connection.
If you want the connection to be re-usable after each request, then you need to do what @Matt recommended and consume the entity.
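For reference, here is a minimal sketch of that pattern, assuming HttpClient 4.5.x as in the pom above; the class name and URL are placeholders:
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class ConsumeThenClose {

    // fetches a URL and fully consumes the body so the pooled connection can be re-used
    static String fetch(CloseableHttpClient client, String url) throws Exception {
        CloseableHttpResponse response = client.execute(new HttpGet(url));
        try {
            // toString() reads the entity stream to its end, which releases the connection
            return EntityUtils.toString(response.getEntity());
        } finally {
            // the entity was fully read above, so by now the connection is already back in the pool
            response.close();
        }
    }

    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            // both calls should ride on the same TCP connection
            fetch(client, "http://www.google.com");
            fetch(client, "http://www.google.com");
        }
    }
}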
I have an API which receives data from a source and sends it to a Telegram bot.
I receive data in bulk from my source and send it to the Telegram bot at the same rate, but Telegram can handle only 1 message per second, so eventually it returns this exception:
java.io.IOException: Server returned HTTP response code: 429 for URL:....
Is there a way to store the messages in a list and iterate over this list from a thread?
I am trying to learn Java, so please don't mind if my code is not good.
Sample.java
class Sample {
    run() {
        while (true) {
            // some operations
            SendMessage.getInstance().sendToTelegram(clientCommand);
            //
        }
    }
}
SendMessage.java
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class SendMessage {

    private static final SendMessage instance = new SendMessage();

    static SendMessage getInstance() {
        return instance;
    }

    public void sendToTelegram(String message) {
        String urlString = "https://api.telegram.org";
        String apiToken = obj.getInstance().getTelegramToken();
        String chatId = obj.getInstance().getChatId();
        String text = message;
        urlString = urlString + "/bot" + apiToken + "/sendMessage?parse_mode=HTML&chat_id=" + chatId + "&text=" + text;
        try {
            URL url = new URL(urlString);
            URLConnection conn = url.openConnection();
            InputStream is = new BufferedInputStream(conn.getInputStream());
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            String inputLine = "";
            StringBuilder sb = new StringBuilder();
            while ((inputLine = br.readLine()) != null) {
                sb.append(inputLine);
                sb.append('\r');
            }
            br.close();
        } catch (IOException e) {
            log.error(e);
        }
    }
}
If the thread concept works, can anyone please help me with how to add the messages to a list and send them to the Telegram bot without losing data?
By using a sleeping thread I am not getting the 429 Too Many Requests exception:
class Sample {
    run() {
        while (true) {
            // some operations
            SendMessage.getInstance().sendToTelegram(clientCommand);
            Thread.sleep(2000);
        }
    }
}
But now I am getting a new Bad Request exception:
java.io.IOException: Server returned HTTP response code: 400 for URL
And this is the demo Telegram URL:
https://api.telegram.org/botid:TELEGRAM_TOKEN/sendMessage?parse_mode=HTML&chat_id=CHAT_ID&text=<b>Alert</b>%0A<b>Alert Name:</b> "REGISTER Violation"%0A<b>Severity:</b> "Medium"%0A<b>TimeStamp:</b> "2022-05-10 22:17:34.31"%0A<b>Event ID:</b> "160"%0A<b>Event Message:</b> "An unregistered User has been detected. This can be a Caller-ID poisoning or Number Harvesting attack. Only a valid registered user can make or receive calls"%0A<b>Source Contact:</b> "192.168.3.31:5077"%0A<b>Destination Contact:</b> "192.168.10.10:5555"%0A<b>Source IP:</b> "192.168.3.31"%0A<b>Destination IP:</b> "192.168.10.10"%0A<b>Source Ext:</b> "4545454545"%0A<b>Destination Ext:</b> "%2B43965272"%0A<b>Source Domain:</b> "n/a"%0A<b>Destination Domain:</b> "n/a"%0A<b>Protocol:</b> "SIP"%0A<b>Comment:</b> "None"%0A<b>Attack Name:</b> "REGISTER Violation"%0A<b>Method:</b> "INVITE"%0A<b>Source Country:</b> "Unknown"%0A<b>Destination Country:</b> "AUSTRIA"%0A<b>CallType:</b> "International"%0A<b>RiskScore:</b> "0"%0A<b>Client Name:</b> "Unknown:Unknown"%0A<b>Network Group Name:</b> "defaultNonVlanGroup"%0A<b>Acknowledged:</b> "No"%0A<b>Alert Category:</b> "External"%0A<b>UCTM Name:</b> "redshift"
I tried manually pasting the URL shown in the exception and it worked fine, but in the application it throws this exception.
Please help me find where I am going wrong.
You could just do a simple Thread.sleep(2000) in your loop, but that might not scale too well.
Or you could store all your messages in a synchronized list (https://www.techiedelight.com/queue-implementation-in-java/) and make a scheduler that reads a message every x seconds, sends it, and removes it from the list. If you're using Spring Boot this is pretty easy -> https://www.baeldung.com/spring-task-scheduler
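For example, a minimal, non-Spring sketch of that queue-plus-scheduler idea, reusing the SendMessage class from the question; the one-second interval matches Telegram's stated limit and the other names are placeholders:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TelegramRateLimiter {

    // unbounded queue: producers can add as fast as they like, nothing is lost
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // drain at most one message per second, which keeps us under Telegram's limit
        scheduler.scheduleAtFixedRate(() -> {
            String message = pending.poll();
            if (message != null) {
                SendMessage.getInstance().sendToTelegram(message);
            }
        }, 0, 1, TimeUnit.SECONDS);
    }

    // call this from the receiving loop instead of sending directly
    public void enqueue(String message) {
        pending.offer(message);
    }
}
The queue decouples the incoming burst rate from the one-message-per-second consumer, so bursts are buffered instead of dropped.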
I am trying to use PoolingHttpClientConnectionManager in our module.
Below is my code snippet
import java.io.IOException;
import org.apache.hc.client5.http.ClientProtocolException;
import org.apache.hc.client5.http.HttpRoute;
import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
import org.apache.hc.client5.http.impl.classic.HttpClientBuilder;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.core5.http.HttpEntity;
import org.apache.hc.core5.http.HttpHost;
import org.apache.hc.core5.http.io.entity.EntityUtils;
public class Testme {

    static PoolingHttpClientConnectionManager connectionManager;
    static CloseableHttpClient httpClient;

    public static void main(String args[]) {
        connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setDefaultMaxPerRoute(3);
        connectionManager.setMaxPerRoute(new HttpRoute(new HttpHost("http", "127.0.0.1", 8887)), 5);
        httpClient = HttpClientBuilder.create().setConnectionManager(connectionManager).build();

        System.out.println("Available connections " + connectionManager.getTotalStats().getAvailable());
        System.out.println("Max Connections " + connectionManager.getTotalStats().getMax());
        System.out.println("Number of routes " + connectionManager.getRoutes().size());

        Testme testme = new Testme();
        Testme.ThreadMe threads[] = new Testme.ThreadMe[5];
        for (int i = 0; i < 5; i++)
            threads[i] = testme.new ThreadMe();
        for (Testme.ThreadMe thread : threads) {
            System.out.println("Leased connections before assigning " + connectionManager.getTotalStats().getLeased());
            thread.start();
        }
    }

    class ThreadMe extends Thread {
        @Override
        public void run() {
            try {
                CloseableHttpResponse response = httpClient.execute(new HttpGet("http://127.0.0.1:8887"));
                System.out.println("Req for " + Thread.currentThread().getName() + " executed with " + response);
                try {
                    HttpEntity entity = response.getEntity();
                    EntityUtils.consume(entity);
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    response.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
The output I received was as below:
Available connections 0
Max Connections 25
Number of routes 0
Leased connections before assigning 0
Leased connections before assigning 0
Leased connections before assigning 0
Leased connections before assigning 0
Leased connections before assigning 0
Req for Thread-2 executed with 200 OK HTTP/1.1
Req for Thread-4 executed with 200 OK HTTP/1.1
Req for Thread-3 executed with 200 OK HTTP/1.1
Req for Thread-0 executed with 200 OK HTTP/1.1
Req for Thread-1 executed with 200 OK HTTP/1.1
I am unable to understand why my leased connections are always 0 even though there are requests executing. Also, the routes are always shown as 0 even though I have registered the route. It seems to me that there is some problem, but I could not identify it. Available connections are also shown as 0 during execution (though that is not printed here). Please help me find out what went wrong. Thanks!
The first print of the "Available connections" message is of course 0: only after you get the response is it decided, based on the default strategies DefaultClientConnectionReuseStrategy and DefaultConnectionKeepAliveStrategy, whether the connection should be reusable and for how long; only then is the connection moved to the available-connections list.
I'm guessing that the number of routes is also only determined after at least one connection has been created.
In your log you can see that the main thread printed all the "Leased connections before assigning" messages before the child threads ran, and so you see 0 leased connections.
A connection appears in the leased-connections list only from the time it is created until the time it is released, which usually happens on reading the response entity, on response.close(), on closing the connection manager, and maybe on EntityUtils.consume() as well.
So maybe try to move the "Leased connections before assigning" print from the main thread into the child threads.
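For example, a minimal sketch of that change, reusing the static connectionManager and httpClient fields and the imports from the question; the stats are sampled inside the worker thread while the response is still open, so the leased count should now be non-zero:
class ThreadMe extends Thread {
    @Override
    public void run() {
        try {
            // sampled in the worker, just before the request is executed
            System.out.println(Thread.currentThread().getName()
                    + " leased before execute: " + connectionManager.getTotalStats().getLeased());
            CloseableHttpResponse response = httpClient.execute(new HttpGet("http://127.0.0.1:8887"));
            try {
                // sampled while the connection is still leased to this response
                System.out.println(Thread.currentThread().getName()
                        + " leased while response open: " + connectionManager.getTotalStats().getLeased()
                        + ", routes: " + connectionManager.getRoutes().size());
                EntityUtils.consume(response.getEntity());
            } finally {
                response.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}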
I am trying to implement a WebSocket server in Java. The code for the server is:
ServerSocket serverSocket = new ServerSocket();
SocketAddress socketAddress = new InetSocketAddress(9000);
LOGGER.info("Listening on port :: 9000");
serverSocket.bind(socketAddress, 5);
while (true) {
    Socket socket = serverSocket.accept();
    Processor processor = Processor.getInstance(processorCounter++);
    try {
        processor.begin(socket);
    } catch (AppException e) {
        LOGGER.info(e.getMessage(), e);
    }
}
I am creating a new Processor thread which accepts the socket and begins to publish responses on it without closing the connection. The connection is long-term and persistent.
When I hit this server with around 2000 requests, I observe that no more than 249 threads are created. The question is: why are no more than 249 threads/processors being spawned?
P.S.:
The requests are being sent from Google Chrome.
The JavaScript that makes the requests:
const func1 = function() {
    for (var i = 1; i <= 2000; i++) {
        var ws = new WebSocket("ws://localhost:9000");
        ws.onopen = function(event) {
            ws.send("are you a teapot?! from client " + i);
        };
        ws.onmessage = function(event) {
            console.log("Server says : " + event.data);
        };
        ws.onerror = function(event) {
            console.log("error () -> " + JSON.stringify(event));
        };
        ws.onclose = function(event) {
            console.log("close () -> " + JSON.stringify(event));
        };
    }
}
This code creates a new connection to the RESTful server for each request rather than just using the existing connection. How do I change the code so that there is only one connection?
The line response = oClientCloseable.execute(...) not only does the task, but also creates a connection.
I checked the server daemon log, and the only activity is generated by the .execute() method.
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.HttpClientUtils;
import org.apache.http.conn.ConnectionPoolTimeoutException;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
...
String pathPost = "http://someurl";
String pathDelete = "http://someurl2";
String xmlPost = "myxml";
HttpResponse response = null;
BufferedReader rd = null;
String line = null;
CloseableHttpClient oClientCloseable = HttpClientBuilder.create().setDefaultRequestConfig(defaultRequestConfig).build();
for (int iLoop = 0; iLoop < 25; iLoop++)
{
HttpPost hPost = new HttpPost(pathPost);
hPost.setHeader("Content-Type", "application/xml");
StringEntity se = new StringEntity(xmlPost);
hPost.setEntity(se);
line = "";
try
{
response = oClientCloseable.execute(hPost);
rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
while ((line = rd.readLine()) != null)
{
System.out.println(line);
}
}
catch (ClientProtocolException e)
{
e.printStackTrace();
}
catch (ConnectionPoolTimeoutException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
HttpClientUtils.closeQuietly(response);
}
HttpDelete hDelete = new HttpDelete(pathDelete);
hDelete.setHeader("Content-Type", "application/xml");
try
{
response = oClientCloseable.execute(hDelete);
}
catch (ClientProtocolException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
HttpClientUtils.closeQuietly(response);
}
}
oClientCloseable.close();
When connecting, the server daemon log emits the following, for whatever it is worth:
HTTP connection from [192.168.20.86]...ALLOWED
POST [/linx] SIZE 248
LINK-18446744073709551615: 2 SEND-BMQs, 2 RECV-BMQs
THREAD-LINK_CONNECT-000, TID: 7F0F1B7FE700 READY
NODE connecting to [192.168.30.20]:9099...
LINK-0-CONTROL-NODE-0 connected to 192.168.30.20(192.168.30.20 IPv4 address: 192.168.30.20):9099
Auth accepted, protocol compatible
NODE connecting to [192.168.30.20]:9099...
This article seems the most relevant, as it talks about consuming (closing) connections, which ties in with the response. That article is also out of date, as consumeContent is deprecated. It seems that response.close() is the proper way, but that closes the connection, and a new response creates a new connection.
It seems that I need to somehow create one connection to the server daemon and then change the action (GET, POST, PUT, or DELETE).
Thoughts on how the code should change?
Here are some other links that I used:
link 1
link 2
link 3
I implemented the suggestion of Robert Rowntree (sorry, not sure how to properly reference the name) by replacing the beginning code with:
// Increase max total connection to 200 and increase default max connection per route to 20.
// Configure total max or per route limits for persistent connections
// that can be kept in the pool or leased by the connection manager.
PoolingHttpClientConnectionManager oConnectionMgr = new PoolingHttpClientConnectionManager();
oConnectionMgr.setMaxTotal(200);
oConnectionMgr.setDefaultMaxPerRoute(20);
oConnectionMgr.setMaxPerRoute(new HttpRoute(new HttpHost("192.168.20.120", 8080)), 20);
RequestConfig defaultRequestConfig = RequestConfig.custom()
.setSocketTimeout(5000)
.setConnectTimeout(5000)
.setConnectionRequestTimeout(5000)
.setStaleConnectionCheckEnabled(true)
.build();
//HttpClient client = HttpClientBuilder.create().setDefaultRequestConfig(defaultRequestConfig).build();
CloseableHttpClient oClientCloseable = HttpClientBuilder.create()
.setConnectionManager(oConnectionMgr)
.setDefaultRequestConfig(defaultRequestConfig)
.build();
I still saw the bunch of authenticate messages.
I contacted the vendor and shared with them the log from the modified version, and my code was clean.
My test sample created a connection (to a remote server), then deleted the connection, and repeated this however many times. Their code dumps the authenticate message each time a connection-creation request arrives.
I was pointed to what I technically already knew: the line that indicates a new RESTful connection to the service is always "XXXXX connection allowed". There was one of those, two if you count my going to the browser-based interface afterwards to make sure that all my links were gone.
Sadly, I am not sure that I can use the Apache client. Apache does not support message bodies inside a GET request. To the simple-minded here (me, in this case), Apache does not allow:
GET http://www.example.com/whatevermethod:myport?arg1=data1&arg2=data2
Apache HttpClient's HttpGet does not have a setEntity method. Research showed that being done as a POST request, but the service is the way that it is and will not change, so...
You can definitely use query parameters in Apache HttpClient:
URIBuilder builder = new URIBuilder("http://www.example.com/whatevermehtod");
builder.addParameter("arg1", "data1");
URI uri = builder.build();
HttpGet get = new HttpGet(uri);
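As a follow-up sketch, the built request can then be executed with the pooled CloseableHttpClient from the snippets above (oClientCloseable), consuming the body so the same connection can be re-used; the extra imports assumed here are org.apache.http.client.methods.CloseableHttpResponse and org.apache.http.util.EntityUtils:
CloseableHttpResponse response = oClientCloseable.execute(get);
try {
    // fully consuming the entity lets the connection manager hand the connection back for re-use
    System.out.println(EntityUtils.toString(response.getEntity()));
} finally {
    response.close();
}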
I'm establishing an HttpURLConnection to a web server with basically the following two methods:
private HttpURLConnection establishConnection(URL url) {
    HttpURLConnection conn = null;
    try {
        conn = (HttpURLConnection) url.openConnection();
        conn = authenticate(conn);
        conn.setRequestMethod(httpMethod);
        conn.setConnectTimeout(50000);
        conn.connect();
        input = conn.getInputStream();
        return conn;
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return null;
}

private HttpURLConnection authenticate(HttpURLConnection conn) {
    String userpass = webServiceUserName + ":" + webServicePassword;
    byte[] authEncBytes = Base64.encodeBase64(userpass.getBytes());
    String authStringEnc = new String(authEncBytes);
    conn.setRequestProperty("Authorization", "Basic " + authStringEnc);
    return conn;
}
This works quite well; the server sends some XML file and I can continue with it. The problem I'm encountering is that I have to do about ~220 of these, and they add up to about 25 s of processing time. The data is used in a web page, so a 25 s response time is not really acceptable.
The code above takes about 86000036 ns (~86 ms), so I'm searching for a way to improve the speed somehow. I tried using the org.apache.http.* package, but that was a bit slower than my current implementation.
Thanks
Markus
Edit: input = conn.getInputStream();
is responsible for ~82-85 ms of that delay. Is there any way "around" it?
Edit 2: I used the connection manager as well:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(200);
cm.setDefaultMaxPerRoute(20);
HttpHost localhost = new HttpHost(webServiceHostName, 443);
cm.setMaxPerRoute(new HttpRoute(localhost), 50);
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(
new AuthScope(webServiceHostName, 443),
new UsernamePasswordCredentials(webServiceUserName, webServicePassword));
httpclient = HttpClients.custom().setConnectionManager(cm).setDefaultCredentialsProvider(credsProvider).build();
But the runtime increases to ~40 s, and I get a warning from my Tomcat after every request that the cookie was rejected because of an "Illegal path attribute".
You may be able to get a substantial boost by downloading a number of files in parallel.
I had a project where I had to download 20 resources from a server over a satellite backhaul (around 700ms round-trip delay). Downloading them sequentially took around 30 seconds; 5 at a time took 6.5 seconds, 10 at a time took 3.5 seconds, and all 20 at once was a bit over 2.5 seconds.
Here is an example which will perform multiple downloads concurrently and, if supported by the server, will use connection keep-alive.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.protocol.BasicHttpContext;
import org.apache.http.protocol.HttpContext;
import org.apache.http.util.EntityUtils;
public class Downloader {
private static final int MAX_REQUESTS_PER_ROUTE = 10;
private static final int MAX_REQUESTS_TOTAL = 50;
private static final int MAX_THREAD_DONE_WAIT = 60000;
public static void main(String[] args) throws IOException,
InterruptedException {
long startTime = System.currentTimeMillis();
// create connection manager and http client
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setDefaultMaxPerRoute(MAX_REQUESTS_PER_ROUTE);
cm.setMaxTotal(MAX_REQUESTS_TOTAL);
CloseableHttpClient httpclient = HttpClients.custom()
.setConnectionManager(cm).build();
// list of download items
List<DownloadItem> items = new ArrayList<DownloadItem>();
items.add(new DownloadItem("http://www.example.com/file1.xml"));
items.add(new DownloadItem("http://www.example.com/file2.xml"));
items.add(new DownloadItem("http://www.example.com/file3.xml"));
items.add(new DownloadItem("http://www.example.com/file4.xml"));
// create and start download threads
DownloadThread[] threads = new DownloadThread[items.size()];
for (int i = 0; i < items.size(); i++) {
threads[i] = new DownloadThread(httpclient, items.get(i));
threads[i].start();
}
// wait for all threads to complete
for (int i = 0; i < items.size(); i++) {
threads[i].join(MAX_THREAD_DONE_WAIT);
}
// use content
for (DownloadItem item : items) {
System.out.println("uri: " + item.uri + ", status-code: "
+ item.statusCode + ", content-length: "
+ item.content.length);
}
// done with http client
httpclient.close();
System.out.println("Time to download: "
+ (System.currentTimeMillis() - startTime) + "ms");
}
static class DownloadItem {
String uri;
byte[] content;
int statusCode;
DownloadItem(String uri) {
this.uri = uri;
content = null;
statusCode = -1;
}
}
static class DownloadThread extends Thread {
private final CloseableHttpClient httpClient;
private final DownloadItem item;
public DownloadThread(CloseableHttpClient httpClient, DownloadItem item) {
this.httpClient = httpClient;
this.item = item;
}
@Override
public void run() {
try {
HttpGet httpget = new HttpGet(item.uri);
HttpContext context = new BasicHttpContext();
CloseableHttpResponse response = httpClient.execute(httpget,
context);
try {
item.statusCode = response.getStatusLine().getStatusCode();
HttpEntity entity = response.getEntity();
if (entity != null) {
item.content = EntityUtils.toByteArray(entity);
}
} finally {
response.close();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
Without knowing what kind of work your web requests do, I assume that more than 99% of the 25 seconds consists of network time and waiting around for various resources to respond (disk systems, LDAP servers, name servers, etc.).
The Speed of Light
I see you use a userid/password against the web server. Is this an external web server? If so, the network distance itself could account for the 86 ms. With many requests you start to feel the restriction of the speed of light.
The way to optimize your program is to minimize all the waiting time stacking up. This might be done by running requests in parallel, or by allowing for multiple requests in one request (if you can change the web server).
Connection pooling itself won't solve the problem if you still run the requests in sequence.
A possible solution
Based on the further description in the comments, you might use the following sequence (a sketch of the parallel-fetch step follows the list):
Request the overview XML.
Extract list of devices from overview XML.
Request device details for all devices in parallel.
Collect responses from all requests.
Run through XML again, and this time update with the responses.
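A minimal sketch of steps 3 and 4 under these assumptions: Java 8, a fixed-size thread pool, and a hypothetical fetchDeviceDetail(String id) method that wraps the existing establishConnection()/authenticate() logic for a single device:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelDeviceFetch {

    // hypothetical: wraps the blocking establishConnection()/authenticate() call for one device
    static String fetchDeviceDetail(String deviceId) {
        return "<device id='" + deviceId + "'/>"; // placeholder body
    }

    public static Map<String, String> fetchAll(List<String> deviceIds) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(20); // 20 requests in flight at once
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String id : deviceIds) {
                Callable<String> task = () -> fetchDeviceDetail(id);
                futures.add(pool.submit(task));
            }
            // collect responses; total wall time is roughly the slowest request, not the sum
            Map<String, String> detailsById = new HashMap<>();
            for (int i = 0; i < deviceIds.size(); i++) {
                detailsById.put(deviceIds.get(i), futures.get(i).get());
            }
            return detailsById;
        } finally {
            pool.shutdown();
        }
    }
}
Step 5 can then run through the overview XML once more and merge in the collected details.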