This code creates a new connection to the RESTful server for each request rather than reusing the existing connection. How do I change the code so that there is only one connection?
The line "response = oClientCloseable.execute(...)" not only performs the task but also creates a connection.
I checked the server daemon log, and the only activity comes from the .execute() method.
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.HttpClientUtils;
import org.apache.http.conn.ConnectionPoolTimeoutException;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
...
String pathPost = "http://someurl";
String pathDelete = "http://someurl2";
String xmlPost = "myxml";
HttpResponse response = null;
BufferedReader rd = null;
String line = null;
CloseableHttpClient oClientCloseable = HttpClientBuilder.create().setDefaultRequestConfig(defaultRequestConfig).build();
for (int iLoop = 0; iLoop < 25; iLoop++)
{
    HttpPost hPost = new HttpPost(pathPost);
    hPost.setHeader("Content-Type", "application/xml");
    StringEntity se = new StringEntity(xmlPost);
    hPost.setEntity(se);
    line = "";
    try
    {
        response = oClientCloseable.execute(hPost);
        rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
        while ((line = rd.readLine()) != null)
        {
            System.out.println(line);
        }
    }
    catch (ClientProtocolException e)
    {
        e.printStackTrace();
    }
    catch (ConnectionPoolTimeoutException e)
    {
        e.printStackTrace();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        HttpClientUtils.closeQuietly(response);
    }

    HttpDelete hDelete = new HttpDelete(pathDelete);
    hDelete.setHeader("Content-Type", "application/xml");
    try
    {
        response = oClientCloseable.execute(hDelete);
    }
    catch (ClientProtocolException e)
    {
        e.printStackTrace();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        HttpClientUtils.closeQuietly(response);
    }
}
oClientCloseable.close();
The server daemon log emits the following for whatever it is worth, when connecting.
HTTP connection from [192.168.20.86]...ALLOWED
POST [/linx] SIZE 248
LINK-18446744073709551615: 2 SEND-BMQs, 2 RECV-BMQs
THREAD-LINK_CONNECT-000, TID: 7F0F1B7FE700 READY
NODE connecting to [192.168.30.20]:9099...
LINK-0-CONTROL-NODE-0 connected to 192.168.30.20(192.168.30.20 IPv4 address: 192.168.30.20):9099
Auth accepted, protocol compatible
NODE connecting to [192.168.30.20]:9099...
This article seems the most relevant, as it talks about consuming (closing) connections, which ties in with the response. That article is also out of date, as consumeContent is deprecated. It seems that response.close() is the proper way, but that closes the connection, and a new response creates a new connection.
It seems that I need to somehow create one connection to the server daemon and then change the action (get, post, put, or delete).
Thoughts on how the code should change?
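For reference, one way to keep a single pooled connection alive is to drain each response entity before releasing the response; the response then hands its connection back to the pool instead of closing it. This is only a minimal sketch (placeholder URL and payload taken from the code above, and the default client instead of the custom builder):

```java
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class SingleConnectionSketch {
    public static void main(String[] args) throws Exception {
        // One client, one pooled connection, reused for all 25 iterations.
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            for (int i = 0; i < 25; i++) {
                HttpPost post = new HttpPost("http://someurl"); // placeholder from the question
                post.setEntity(new StringEntity("myxml", ContentType.APPLICATION_XML));
                try (CloseableHttpResponse response = client.execute(post)) {
                    // Draining the entity hands the connection back to the
                    // pool for reuse; response.close() then does not have to
                    // shut the socket down.
                    System.out.println(EntityUtils.toString(response.getEntity()));
                }
            }
        }
    }
}
```

In HttpClient 4.3+, closing a response whose entity has been fully consumed releases the connection back to the manager for reuse rather than closing the socket.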
Here are some other links that I used:
link 1
link 2
link 3
I implemented the suggestion of Robert Rowntree (sorry, not sure how to properly reference the name) by replacing the beginning code with:
// Increase max total connection to 200 and increase default max connection per route to 20.
// Configure total max or per route limits for persistent connections
// that can be kept in the pool or leased by the connection manager.
PoolingHttpClientConnectionManager oConnectionMgr = new PoolingHttpClientConnectionManager();
oConnectionMgr.setMaxTotal(200);
oConnectionMgr.setDefaultMaxPerRoute(20);
oConnectionMgr.setMaxPerRoute(new HttpRoute(new HttpHost("192.168.20.120", 8080)), 20);
RequestConfig defaultRequestConfig = RequestConfig.custom()
.setSocketTimeout(5000)
.setConnectTimeout(5000)
.setConnectionRequestTimeout(5000)
.setStaleConnectionCheckEnabled(true)
.build();
//HttpClient client = HttpClientBuilder.create().setDefaultRequestConfig(defaultRequestConfig).build();
CloseableHttpClient oClientCloseable = HttpClientBuilder.create()
.setConnectionManager(oConnectionMgr)
.setDefaultRequestConfig(defaultRequestConfig)
.build();
I still saw a bunch of authenticates.
I contacted the vendor and shared the log from the modified version with them, and my code was clean.
My test sample created a connection (to a remote server), then deleted the connection, and repeated that however many times. Their code dumps the authenticate message each time a connection-creation request arrives.
I was pointed to what I technically already knew: the line that indicates a new RESTful connection to the service is always "XXXXX connection allowed". There was one of those (two if you count my going to the browser-based interface afterwards to make sure that all my links were gone).
Sadly, I am not sure that I can use the Apache client. Apache does not support message bodies inside a GET request. To the simple-minded here (me, in this case), Apache does not allow:
GET http://www.example.com/whatevermethod:myport?arg1=data1&arg2=data2
Apache HttpClient's HttpGet does not have a setEntity method. Research showed that done as a POST request, but the service is the way that it is and will not change, so...
You can definitely use query parameters in Apache HttpClient:
URIBuilder builder = new URIBuilder("http://www.example.com/whatevermethod");
builder.addParameter("arg1", "data1");
URI uri = builder.build();
HttpGet get = new HttpGet(uri);
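For completeness: if the service really does require an entity on a GET, a commonly cited workaround (a sketch, not an endorsement, since bodies on GET requests are discouraged) is to subclass HttpEntityEnclosingRequestBase and report "GET" as the method; HttpGetWithEntity is a hypothetical name:

```java
import java.net.URI;
import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
import org.apache.http.entity.StringEntity;

// Hypothetical subclass: HttpEntityEnclosingRequestBase supplies setEntity(),
// and overriding getMethod() makes the request line read GET.
public class HttpGetWithEntity extends HttpEntityEnclosingRequestBase {

    public HttpGetWithEntity(URI uri) {
        setURI(uri);
    }

    @Override
    public String getMethod() {
        return "GET";
    }

    public static void main(String[] args) throws Exception {
        HttpGetWithEntity get = new HttpGetWithEntity(
                URI.create("http://www.example.com/whatevermethod"));
        get.setEntity(new StringEntity("arg1=data1&arg2=data2"));
        System.out.println(get.getMethod() + " " + get.getURI());
    }
}
```

The request can then be passed to CloseableHttpClient.execute() like any other HttpUriRequest.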
Related
I have an API which receives data from a source and sends it to a Telegram bot.
I receive data in bulk from my source and send it to the Telegram bot at that rate, but Telegram can handle only 1 message per second, so eventually it returns this exception:
java.io.IOException: Server returned HTTP response code: 429 for URL:....
Is there a way to store the messages in a list and iterate over this list from a thread?
I am trying to learn Java, so please don't mind if my code is not good.
Sample.java
class Sample {
    void run() {
        while (true) {
            // some operations
            SendMessage.getInstance().sendToTelegram(clientCommand);
            //
        }
    }
}
SendMessage.java
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class SendMessage {

    private static final SendMessage instance = new SendMessage();

    static SendMessage getInstance() {
        return instance;
    }

    public void sendToTelegram(String message) {
        String urlString = "https://api.telegram.org";
        String apiToken = obj.getInstance().getTelegramToken();
        String chatId = obj.getInstance().getChatId();
        String text = message;
        urlString = urlString + "/bot" + apiToken + "/sendMessage?parse_mode=HTML&chat_id=" + chatId + "&text=" + text;
        try {
            URL url = new URL(urlString);
            URLConnection conn = url.openConnection();
            InputStream is = new BufferedInputStream(conn.getInputStream());
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            String inputLine = "";
            StringBuilder sb = new StringBuilder();
            while ((inputLine = br.readLine()) != null) {
                sb.append(inputLine);
                sb.append('\r');
            }
            br.close();
        } catch (IOException e) {
            log.error(e);
        }
    }
}
If the thread concept works, can anyone please help me with how to add the messages into a list and send them to the Telegram bot without losing data?
By using a sleeping thread I am no longer getting the 429 Too Many Requests exception:
class Sample {
    void run() {
        while (true) {
            // some operations
            SendMessage.getInstance().sendToTelegram(clientCommand);
            Thread.sleep(2000);
        }
    }
}
but now I am getting a new exception, bad request:
java.io.IOException: Server returned HTTP response code: 400 for URL
and this is the demo Telegram URL:
https://api.telegram.org/botid:TELEGRAM_TOKEN/sendMessage?parse_mode=HTML&chat_id=CHAT_ID&text=<b>Alert</b>%0A<b>Alert Name:</b> "REGISTER Violation"%0A<b>Severity:</b> "Medium"%0A<b>TimeStamp:</b> "2022-05-10 22:17:34.31"%0A<b>Event ID:</b> "160"%0A<b>Event Message:</b> "An unregistered User has been detected. This can be a Caller-ID poisoning or Number Harvesting attack. Only a valid registered user can make or receive calls"%0A<b>Source Contact:</b> "192.168.3.31:5077"%0A<b>Destination Contact:</b> "192.168.10.10:5555"%0A<b>Source IP:</b> "192.168.3.31"%0A<b>Destination IP:</b> "192.168.10.10"%0A<b>Source Ext:</b> "4545454545"%0A<b>Destination Ext:</b> "%2B43965272"%0A<b>Source Domain:</b> "n/a"%0A<b>Destination Domain:</b> "n/a"%0A<b>Protocol:</b> "SIP"%0A<b>Comment:</b> "None"%0A<b>Attack Name:</b> "REGISTER Violation"%0A<b>Method:</b> "INVITE"%0A<b>Source Country:</b> "Unknown"%0A<b>Destination Country:</b> "AUSTRIA"%0A<b>CallType:</b> "International"%0A<b>RiskScore:</b> "0"%0A<b>Client Name:</b> "Unknown:Unknown"%0A<b>Network Group Name:</b> "defaultNonVlanGroup"%0A<b>Acknowledged:</b> "No"%0A<b>Alert Category:</b> "External"%0A<b>UCTM Name:</b> "redshift"
I tried it manually by pasting the URL shown in the exception, and it worked fine, but in the application it throws this exception.
Please help me find where I am going wrong.
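For what it's worth, the 400 is likely because the text parameter contains characters that must be percent-encoded (spaces, quotes, angle brackets); the browser encodes them for you when you paste the URL, but URLConnection sends the string as-is. A minimal sketch using the stdlib (buildSendMessageUrl is a hypothetical helper, with placeholder token and chat id):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TelegramUrl {

    // Hypothetical helper: percent-encode the free-text parameter so that
    // spaces, quotes and HTML tags survive the request URL.
    static String buildSendMessageUrl(String apiToken, String chatId, String text) throws Exception {
        return "https://api.telegram.org/bot" + apiToken
                + "/sendMessage?parse_mode=HTML&chat_id=" + chatId
                + "&text=" + URLEncoder.encode(text, StandardCharsets.UTF_8.name());
    }

    public static void main(String[] args) throws Exception {
        // Placeholder token and chat id, not real credentials.
        System.out.println(buildSendMessageUrl("TELEGRAM_TOKEN", "CHAT_ID",
                "<b>Alert</b> \"REGISTER Violation\""));
    }
}
```

Only the value of the text parameter is encoded here; the parse_mode and chat_id values in the question contain no reserved characters.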
You could just do a simple Thread.sleep(2000) in your loop. It might not scale too well, though.
Or you could store all your messages in a synchronized list (https://www.techiedelight.com/queue-implementation-in-java/) and make a scheduler that reads a message every x seconds, sends it, and deletes it from the list. If you're using Spring Boot, this is pretty easy -> https://www.baeldung.com/spring-task-scheduler
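The list-plus-scheduler idea can be sketched with plain java.util.concurrent, no Spring required; sendToTelegram here is a stub standing in for the real SendMessage call:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RateLimitedSender {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Producer side: never blocks, so bulk arrivals are buffered, not dropped.
    public void enqueue(String message) {
        queue.add(message);
    }

    // Consumer side: drain at most one message per period.
    public void start(long periodMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            String msg = queue.poll();
            if (msg != null) {
                sendToTelegram(msg);
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    // Stub standing in for SendMessage.getInstance().sendToTelegram(msg).
    protected void sendToTelegram(String message) {
        System.out.println("sent: " + message);
    }

    public void stop() {
        scheduler.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        RateLimitedSender sender = new RateLimitedSender();
        sender.enqueue("first");
        sender.enqueue("second");
        sender.start(1000); // Telegram's limit: one message per second
        Thread.sleep(2500);
        sender.stop();
    }
}
```

The producer thread never sleeps, so no data is lost on bulk arrivals; the queue absorbs the burst and the scheduler paces delivery.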
I'm creating an Apache FTPS client (because the remote server won't allow plain FTP). I can connect and delete files without problems, but when using retrieveFile() or retrieveFileStream(), it hangs.
For some reason, very small files do transfer (up to 5792 bytes), but anything larger gives the following PrintCommandListener output:
run:
220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
220-You are user number 2 of 50 allowed.
220-Local time is now 19:42. Server port: 21.
220-This is a private system - No anonymous login
220-IPv6 connections are also welcome on this server.
220 You will be disconnected after 15 minutes of inactivity.
AUTH TLS
234 AUTH TLS OK.
USER
331 User OK. Password required
PASS
230 OK. Current restricted directory is /
TYPE A
200 TYPE is now ASCII
EPSV
229 Extended Passive mode OK (|||53360|)
RETR test.txt
150-Accepted data connection
150 7.3 kbytes to download
Here is the code:
try {
    FTPSClient ftpClient = new FTPSClient("tls", false);
    ftpClient.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out)));
    ftpClient.connect(host, port);
    int reply = ftpClient.getReplyCode();
    if (FTPReply.isPositiveCompletion(reply)) {
        ftpClient.enterLocalPassiveMode();
        ftpClient.login(username, password);
        ftpClient.enterLocalPassiveMode();
        FileOutputStream outputStream = new FileOutputStream(tempfile);
        ftpClient.setFileType(FTPClient.ASCII_FILE_TYPE);
        ftpClient.retrieveFile("test.txt", outputStream);
        outputStream.close();
        ftpClient.logout();
        ftpClient.disconnect();
    }
} catch (IOException ioe) {
    System.out.println("FTP client received network error");
}
Any ideas are greatly appreciated.
Typically the FTP command sequence for FTPS connections goes (per RFC 4217) AUTH TLS, PBSZ 0, then USER, PASS, etc. Thus you might try:
FTPSClient ftpClient = new FTPSClient("tls", false);
ftpClient.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out)));
ftpClient.connect(host, port);
int reply = ftpClient.getReplyCode();
if (FTPReply.isPositiveCompletion(reply)) {
    ftpClient.execPBSZ(0);
    reply = ftpClient.getReplyCode();
    // Check for PBSZ error responses...
    ftpClient.execPROT("P");
    reply = ftpClient.getReplyCode();
    // Check for PROT error responses...
    ftpClient.enterLocalPassiveMode();
This explicitly tells the server not to buffer the data connection (PBSZ 0) and to use TLS to protect the data transfer (PROT P).
The fact that you are able to transfer some bytes indicates that the issue is not the usual complication with firewalls/routers/NAT, which is another common FTPS issue.
Hope this helps!
Even if PBSZ 0 and PROT P are called in the correct sequence, the server may additionally require SSL session reuse, which the client does not perform by default.
For example, the following reply comes back when trying to list a directory. As a result, no content listing is returned, and the client sees the directory as if it were empty:
LIST /
150 Here comes the directory listing.
522 SSL connection failed; session reuse required: see require_ssl_reuse option in sftpd.conf man page
To overcome that, custom initialization of the FTPSClient is needed by overriding _prepareDataSocket_() method.
The solution is explained in detail here: https://eng.wealthfront.com/2016/06/10/connecting-to-an-ftps-server-with-ssl-session-reuse-in-java-7-and-8/
Working code sample taken from the above link:
import java.io.IOException;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.net.Socket;
import java.util.Locale;
import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSessionContext;
import javax.net.ssl.SSLSocket;
import org.apache.commons.net.ftp.FTPSClient;
import com.google.common.base.Throwables;
public class SSLSessionReuseFTPSClient extends FTPSClient {

    // adapted from: https://trac.cyberduck.io/changeset/10760
    @Override
    protected void _prepareDataSocket_(final Socket socket) throws IOException {
        if (socket instanceof SSLSocket) {
            // _socket_ is the control connection; reuse its SSL session for the data socket
            final SSLSession session = ((SSLSocket) _socket_).getSession();
            final SSLSessionContext context = session.getSessionContext();
            try {
                final Field sessionHostPortCache = context.getClass().getDeclaredField("sessionHostPortCache");
                sessionHostPortCache.setAccessible(true);
                final Object cache = sessionHostPortCache.get(context);
                final Method putMethod = cache.getClass().getDeclaredMethod("put", Object.class, Object.class);
                putMethod.setAccessible(true);
                final Method getHostMethod = socket.getClass().getDeclaredMethod("getHost");
                getHostMethod.setAccessible(true);
                Object host = getHostMethod.invoke(socket);
                final String key = String.format("%s:%s", host, String.valueOf(socket.getPort())).toLowerCase(Locale.ROOT);
                putMethod.invoke(cache, key, session);
            } catch (Exception e) {
                throw Throwables.propagate(e);
            }
        }
    }
}
Hope someone finds my comment useful after several years.
In my case, I replaced retrieveFile with retrieveFileStream. It requires more code, but at least it works.
For me, I fixed the problem after upgrading Apache Commons Net to 3.8.0.
dependencies {
implementation 'commons-net:commons-net:3.8.0'
...
}
HttpPost request = new HttpPost("some url"); // declared outside the try so it is in scope in finally
CloseableHttpResponse response = null;
try {
    // do some thing ....
    response = getHttpClient().execute(request);
    // do some other thing ....
} catch (Exception e) {
    // deal with exception
} finally {
    if (response != null) {
        try {
            response.close(); // (1)
        } catch (Exception e) {
        }
        request.releaseConnection(); // (2)
    }
}
I've made an HTTP request like the above.
In order to release the underlying connection, is it correct to call (1) and (2)? And what's the difference between the two invocations?
Short answer:
request.releaseConnection() releases the underlying HTTP connection so that it can be reused. response.close() closes a stream (not a connection); that stream is the response content being streamed from the network socket.
Long answer:
The correct pattern to follow in any recent version (> 4.2, and probably even before that) is not to use releaseConnection.
request.releaseConnection() releases the underlying HttpConnection so the request can be reused; however, the Javadoc says:
A convenience method to simplify migration from HttpClient 3.1 API...
Instead of releasing the connection, we ensure the response content is fully consumed, which in turn ensures the connection is released and ready for reuse. A short example is shown below:
CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://targethost/homepage");
CloseableHttpResponse response1 = httpclient.execute(httpGet);
try {
    System.out.println(response1.getStatusLine());
    HttpEntity entity1 = response1.getEntity();
    // do something useful with the response body
    String bodyAsString = EntityUtils.toString(entity1);
    System.out.println(bodyAsString);
    // and ensure it is fully consumed (this is how the stream is released)
    EntityUtils.consume(entity1);
} finally {
    response1.close();
}
CloseableHttpResponse.close() closes the tcp socket
HttpPost.releaseConnection() closes the tcp socket
EntityUtils.consume(response.getEntity()) allows you to re-use the tcp socket
Details
CloseableHttpResponse.close() closes the tcp socket, preventing the connection from being re-used. You need to establish a new tcp connection in order to initiate another request.
This is the call chain that lead me to the above conclusion:
HttpResponseProxy.close()
-> ConnectionHolder.close()
-> ConnectionHolder.releaseConnection(reusable=false)
-> managedConn.close()
-> BHttpConnectionBase.close()
-> Socket.close()
HttpPost.releaseConnection() also closes the Socket. This is the call chain that lead me to the above conclusion:
HttpPost.releaseConnection()
-> HttpRequestBase.releaseConnection()
-> AbstractExecutionAwareRequest.reset()
-> ConnectionHolder.cancel()
-> ConnectionHolder.abortConnection()
-> HttpConnection.shutdown()
Here is experimental code that also demonstrates the above three facts:
import java.lang.reflect.Constructor;
import java.net.Socket;
import java.net.SocketImpl;
import java.net.SocketImplFactory;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;
public class Main {
private static SocketImpl newSocketImpl() {
try {
Class<?> defaultSocketImpl = Class.forName("java.net.SocksSocketImpl");
Constructor<?> constructor = defaultSocketImpl.getDeclaredConstructor();
constructor.setAccessible(true);
return (SocketImpl) constructor.newInstance();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public static void main(String[] args) throws Exception {
// this is a hack that lets me listen to Tcp socket creation
final List<SocketImpl> allSockets = Collections.synchronizedList(new ArrayList<>());
Socket.setSocketImplFactory(new SocketImplFactory() {
public SocketImpl createSocketImpl() {
SocketImpl socket = newSocketImpl();
allSockets.add(socket);
return socket;
}
});
System.out.println("num of sockets after start: " + allSockets.size());
CloseableHttpClient client = HttpClientBuilder.create().build();
System.out.println("num of sockets after client created: " + allSockets.size());
HttpGet request = new HttpGet("http://www.google.com");
System.out.println("num of sockets after get created: " + allSockets.size());
CloseableHttpResponse response = client.execute(request);
System.out.println("num of sockets after get executed: " + allSockets.size());
response.close();
System.out.println("num of sockets after response closed: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again: " + allSockets.size());
request.releaseConnection();
System.out.println("num of sockets after release connection: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again for 3rd time: " + allSockets.size());
EntityUtils.consume(response.getEntity());
System.out.println("num of sockets after entityConsumed: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again for 4th time: " + allSockets.size());
}
}
pom.xml
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>org.joseph</groupId>
<artifactId>close.vs.release.conn</artifactId>
<version>1.0.0</version>
<properties>
<maven.compiler.target>1.8</maven.compiler.target>
<maven.compiler.source>1.8</maven.compiler.source>
</properties>
<build>
<plugins>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.5.13</version>
</dependency>
</dependencies>
</project>
Output:
num of sockets after start: 0
num of sockets after client created: 0
num of sockets after get created: 0
num of sockets after get executed: 1
num of sockets after response closed: 1
num of sockets after request executed again: 2
num of sockets after release connection: 2
num of sockets after request executed again for 3rd time: 3
num of sockets after entityConsumed: 3
num of sockets after request executed again for 4th time: 3
Notice that both .close() and .releaseConnection() result in a new tcp connection. Only consuming the entity allows you to re-use the tcp connection.
If you want the connection to be re-usable after each request, then you need to do what @Matt recommended and consume the entity.
I'm establishing an HttpURLConnection to a web server with basically the following two methods:
private HttpURLConnection establishConnection(URL url) {
    HttpURLConnection conn = null;
    try {
        conn = (HttpURLConnection) url.openConnection();
        conn = authenticate(conn);
        conn.setRequestMethod(httpMethod);
        conn.setConnectTimeout(50000);
        conn.connect();
        input = conn.getInputStream();
        return conn;
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return null;
}

private HttpURLConnection authenticate(HttpURLConnection conn) {
    String userpass = webServiceUserName + ":" + webServicePassword;
    byte[] authEncBytes = Base64.encodeBase64(userpass.getBytes());
    String authStringEnc = new String(authEncBytes);
    conn.setRequestProperty("Authorization", "Basic " + authStringEnc);
    return conn;
}
This works quite well; the server sends some XML file and I can continue with it. The problem I'm encountering is that I have to do about 220 of these, and they add up to about 25 s of processing time. The data is used in a web page, so a 25 s response time is not really acceptable.
The code above takes about 86000036 ns (~86 ms), so I'm searching for a way to improve the speed somehow. I tried using the org.apache.http.* package, but that was a bit slower than my current implementation.
Thanks
Markus
Edit: input = conn.getInputStream();
is responsible for ~82-85 ms of that delay. Is there any way "around" it?
Edit 2: I used the connection manager as well:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(200);
cm.setDefaultMaxPerRoute(20);
HttpHost localhost = new HttpHost(webServiceHostName, 443);
cm.setMaxPerRoute(new HttpRoute(localhost), 50);
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(
new AuthScope(webServiceHostName, 443),
new UsernamePasswordCredentials(webServiceUserName, webServicePassword));
httpclient = HttpClients.custom().setConnectionManager(cm).setDefaultCredentialsProvider(credsProvider).build();
But the runtime increases to ~40 s, and I get a warning from my Tomcat after every request that the cookie was rejected because of an "Illegal path attribute".
You may be able to get a substantial boost by downloading a number of files in parallel.
I had a project where I had to download 20 resources from a server over a satellite backhaul (around 700ms round-trip delay). Downloading them sequentially took around 30 seconds; 5 at a time took 6.5 seconds, 10 at a time took 3.5 seconds, and all 20 at once was a bit over 2.5 seconds.
Here is an example which will perform multiple downloads concurrently and, if supported by the server, will use connection keep-alive.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.protocol.BasicHttpContext;
import org.apache.http.protocol.HttpContext;
import org.apache.http.util.EntityUtils;
public class Downloader {
private static final int MAX_REQUESTS_PER_ROUTE = 10;
private static final int MAX_REQUESTS_TOTAL = 50;
private static final int MAX_THREAD_DONE_WAIT = 60000;
public static void main(String[] args) throws IOException,
InterruptedException {
long startTime = System.currentTimeMillis();
// create connection manager and http client
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setDefaultMaxPerRoute(MAX_REQUESTS_PER_ROUTE);
cm.setMaxTotal(MAX_REQUESTS_TOTAL);
CloseableHttpClient httpclient = HttpClients.custom()
.setConnectionManager(cm).build();
// list of download items
List<DownloadItem> items = new ArrayList<DownloadItem>();
items.add(new DownloadItem("http://www.example.com/file1.xml"));
items.add(new DownloadItem("http://www.example.com/file2.xml"));
items.add(new DownloadItem("http://www.example.com/file3.xml"));
items.add(new DownloadItem("http://www.example.com/file4.xml"));
// create and start download threads
DownloadThread[] threads = new DownloadThread[items.size()];
for (int i = 0; i < items.size(); i++) {
threads[i] = new DownloadThread(httpclient, items.get(i));
threads[i].start();
}
// wait for all threads to complete
for (int i = 0; i < items.size(); i++) {
threads[i].join(MAX_THREAD_DONE_WAIT);
}
// use content
for (DownloadItem item : items) {
System.out.println("uri: " + item.uri + ", status-code: "
+ item.statusCode + ", content-length: "
+ item.content.length);
}
// done with http client
httpclient.close();
System.out.println("Time to download: "
+ (System.currentTimeMillis() - startTime) + "ms");
}
static class DownloadItem {
String uri;
byte[] content;
int statusCode;
DownloadItem(String uri) {
this.uri = uri;
content = null;
statusCode = -1;
}
}
static class DownloadThread extends Thread {
private final CloseableHttpClient httpClient;
private final DownloadItem item;
public DownloadThread(CloseableHttpClient httpClient, DownloadItem item) {
this.httpClient = httpClient;
this.item = item;
}
@Override
public void run() {
try {
HttpGet httpget = new HttpGet(item.uri);
HttpContext context = new BasicHttpContext();
CloseableHttpResponse response = httpClient.execute(httpget,
context);
try {
item.statusCode = response.getStatusLine().getStatusCode();
HttpEntity entity = response.getEntity();
if (entity != null) {
item.content = EntityUtils.toByteArray(entity);
}
} finally {
response.close();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
Without knowing what kind of work your web requests do, I assume that more than 99% of the 25 seconds consists of network time and waiting around for various resources to respond (disk systems, LDAP servers, name servers, etc.).
The Speed of Light
I see you use a userid/password against the webserver. Is this an external webserver? If so, the network distance itself could account for the 86 ms. With many requests you start to feel the restriction of the speed of light.
The way to optimize your program is to minimize all the waiting time stacking up. This might be done by running requests in parallel, or by allowing for multiple requests in one call (if you can change the web server).
Connection pooling itself won't solve the problem if you still run the requests in sequence.
A possible solution
Based on the further description in the comments, you might use the following sequence:
Request the overview XML.
Extract the list of devices from the overview XML.
Request the device details for all devices in parallel.
Collect the responses from all requests.
Run through the XML again, and this time update it with the responses.
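The parallel step could be sketched with an ExecutorService; fetchDeviceDetail here is a hypothetical stand-in for one HttpURLConnection round trip:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRequests {

    // Hypothetical stand-in for one HttpURLConnection round trip.
    static String fetchDeviceDetail(String deviceId) {
        return "<device id='" + deviceId + "'/>";
    }

    public static void main(String[] args) throws Exception {
        // List of devices extracted from the overview XML (placeholders).
        List<String> deviceIds = Arrays.asList("a", "b", "c");

        // Issue all detail requests in parallel.
        ExecutorService pool = Executors.newFixedThreadPool(10);
        List<Future<String>> futures = new ArrayList<>();
        for (String id : deviceIds) {
            futures.add(pool.submit(() -> fetchDeviceDetail(id)));
        }

        // Collect the responses (get() blocks until each one is done).
        for (Future<String> f : futures) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

With ~220 requests at ~86 ms each, a pool of 10 threads would bring the total wall-clock time down roughly tenfold, since the per-request latency overlaps.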
I have a list of screen names on Twitter and I wish to get meta data about their twitter profile. I am using Twitter's REST API for the same. The users/show method is apt for my task. The API documentation clearly states that it requires no authentication. Here's the code I wrote for my task:
package Twitter;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
public class TwitterAPI {
private static String url = "http://api.twitter.com/1/users/show/";
/*
 * Sends an HTTP GET request to a URL
 * @return - The response from the end point
 */
public static String sendGetRequest(String endpoint, String screen_name) {
String result = null;
if (endpoint.startsWith("http://")){
//Send HTTP request to the servlet
try {
//Construct data
StringBuffer data = new StringBuffer();
//Send data
String urlStr = endpoint ;
if(screen_name!=null && screen_name.length() > 0){
urlStr += screen_name + ".json";
}
System.out.println(screen_name.length());
System.out.println("The URL call is: " + urlStr);
URL url = new URL(urlStr);
URLConnection conn = url.openConnection ();
//Get the response
BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
StringBuffer sb = new StringBuffer();
String line;
while((line = rd.readLine())!=null){
sb.append(line);
}
rd.close();
result = sb.toString();
} catch (MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
//If API issue, collect screen names to write to API issue file
System.out.println("Twitter API issue :" + screen_name);
// TODO Auto-generated catch block
e.printStackTrace();
}
}
return result;
}
/**
 * @param args
 */
public static void main(String[] args) {
// TODO Auto-generated method stub
String result = sendGetRequest(url, "denzil_correa");
System.out.println(result);
}
}
However, on running it I receive the following exception:
13
The URL call is: http://api.twitter.com/1/users/show/denzil_correa.json
Twitter API issue :denzil_correa
java.net.ConnectException: Connection timed out: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:365)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:227)
null
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:214)
at java.net.Socket.connect(Socket.java:531)
at java.net.Socket.connect(Socket.java:481)
at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
at sun.net.www.http.HttpClient.New(HttpClient.java:306)
at sun.net.www.http.HttpClient.New(HttpClient.java:323)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:783)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:724)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:649)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:972)
at Twitter.TwitterAPI.sendGetRequest(TwitterAPI.java:43)
at Twitter.TwitterAPI.main(TwitterAPI.java:76)
The URL is correct as when I try the URL : http://api.twitter.com/1/users/show/denzil_correa.json in my browser I receive the following:
{"time_zone":"Mumbai","description":"","lang":"en","profile_link_color":"1F98C7","status":{"coordinates":null,"contributors":null,"in_reply_to_screen_name":"shailaja","truncated":false,"in_reply_to_user_id":14089830,"in_reply_to_status_id":16789217674,"source":"web","created_at":"Tue Jun 22 19:43:46 +0000 2010","place":null,"geo":null,"favorited":false,"id":16793898396,"text":"#shailaja Harsh !"},"profile_background_image_url":"http://s.twimg.com/a/1276711174/images/themes/theme2/bg.gif","profile_sidebar_fill_color":"DAECF4","following":false,"profile_background_tile":false,"created_at":"Sun Jun 29 20:23:29 +0000 2008","statuses_count":1157,"profile_sidebar_border_color":"C6E2EE","profile_use_background_image":true,"followers_count":169,"contributors_enabled":false,"notifications":false,"friends_count":246,"protected":false,"url":"http://https://sites.google.com/a/iiitd.ac.in/denzilc/","profile_image_url":"http://a3.twimg.com/profile_images/643636081/Cofee_Mug_normal.jpg","geo_enabled":true,"profile_background_color":"C6E2EE","name":"Denzil Correa","favourites_count":3,"location":"India","screen_name":"denzil_correa","id":15273105,"verified":false,"utc_offset":19800,"profile_text_color":"663B12"}
which is in the JSON format I want.
Kindly let me know if I am doing anything stupid here.
Regards,
--Denzil
Hank/Splix, as suggested, I tried the HttpComponents Client. Here's my modified code:
package Twitter;
import java.io.IOException;
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
public class TwitterAPI {
private static String url = "http://api.twitter.com/1/users/show/denzil_correa.json";
/**
 * @param args
 */
public static void main(String[] args) {
// TODO Auto-generated method stub
HttpClient httpclient = new DefaultHttpClient();
HttpGet httpget = new HttpGet(url);
try {
HttpResponse response = httpclient.execute(httpget);
System.out.println(response.toString());
} catch (ClientProtocolException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
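An aside before the error output: `response.toString()` only prints the status line and headers, not the JSON body. To actually print the JSON you'd read the entity's `InputStream`, e.g. with a small helper like the sketch below (`streamToString` is my own name for it, not part of HttpClient):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamUtil {
    // Read an entire InputStream into a String, line by line.
    static String streamToString(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader rd = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8));
        String line;
        while ((line = rd.readLine()) != null) {
            sb.append(line).append('\n');
        }
        rd.close();
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for response.getEntity().getContent()
        InputStream demo = new ByteArrayInputStream(
                "{\"screen_name\":\"denzil_correa\"}".getBytes(StandardCharsets.UTF_8));
        System.out.print(streamToString(demo));
    }
}
```

With HttpClient you'd pass it `response.getEntity().getContent()`, or just use `EntityUtils.toString(response.getEntity())`, which does the same job.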
Here's the error I receive:
org.apache.http.conn.HttpHostConnectException: Connection to http://api.twitter.com refused
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:127)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:147)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:108)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:641)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:576)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:554)
at Twitter.TwitterAPI.main(TwitterAPI.java:30)
Caused by: java.net.ConnectException: Connection timed out: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:365)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:227)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:214)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:378)
at java.net.Socket.connect(Socket.java:531)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:123)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:123)
... 7 more
Surprisingly, this gives an exception similar to the one from my code that handles HTTP responses manually. I understand that handling HTTP responses by hand may be sub-optimal, but right now I'm not trying to write optimal code; I'd like to get the task done even if it's quick & dirty.
Just to let you know, I can successfully call the Facebook Graph API using the first code I posted. I am receiving the same response I would receive if I paste the URL in my browser.
I will also try using the Twitter4J API once again and check if I can get my task done. Will keep you updated.
So, here's the code using Twitter4J:
package Twitter;
import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;
import twitter4j.User;
public class TwitterAPI {
/**
 * @param args
 */
public static void main(String[] args) {
Twitter unauthenticatedTwitter = new TwitterFactory().getInstance();
try {
User user = unauthenticatedTwitter.showUser("denzil_correa");
System.out.println(user.getLocation());
} catch (TwitterException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
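One thing worth checking with Twitter4J: if your machine reaches the internet through an HTTP proxy, Twitter4J won't pick it up automatically. As far as I recall, it reads proxy settings from a `twitter4j.properties` file on the classpath; the host and port below are placeholders you'd replace with your own:

```
# twitter4j.properties (on the classpath)
# Replace with your actual proxy host and port -- these values are placeholders.
http.proxyHost=proxy.example.com
http.proxyPort=8080
```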
Pretty straightforward as expected using the API. However, here's the error I receive:
Jun 23, 2010 7:12:10 PM twitter4j.internal.logging.CommonsLoggingLogger info
INFO: Using class twitter4j.internal.logging.CommonsLoggingLoggerFactory as logging factory.
Jun 23, 2010 7:12:11 PM twitter4j.internal.logging.CommonsLoggingLogger info
INFO: Use twitter4j.internal.http.HttpClientImpl as HttpClient implementation.
TwitterException{statusCode=-1, retryAfter=0, rateLimitStatus=null}
at twitter4j.internal.http.HttpClientImpl.request(HttpClientImpl.java:316)
at twitter4j.internal.http.HttpClientWrapper.request(HttpClientWrapper.java:68)
at twitter4j.internal.http.HttpClientWrapper.get(HttpClientWrapper.java:90)
at twitter4j.Twitter.showUser(Twitter.java:538)
at Twitter.TwitterAPI.main(TwitterAPI.java:17)
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:365)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:227)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:214)
at java.net.Socket.connect(Socket.java:531)
at sun.net.NetworkClient.doConnect(NetworkClient.java:152)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
at com.ibm.net.ssl.www2.protocol.https.c.<init>(c.java:166)
at com.ibm.net.ssl.www2.protocol.https.c.a(c.java:9)
at com.ibm.net.ssl.www2.protocol.https.d.getNewHttpClient(d.java:55)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:724)
at com.ibm.net.ssl.www2.protocol.https.d.connect(d.java:20)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:972)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:385)
at com.ibm.net.ssl.www2.protocol.https.b.getResponseCode(b.java:52)
at twitter4j.internal.http.HttpResponseImpl.<init>(HttpResponseImpl.java:42)
at twitter4j.internal.http.HttpClientImpl.request(HttpClientImpl.java:279)
... 4 more
Again, I see that the error is essentially the same. So, all options tried! I'm sure there's something I'm missing here; it would be great if you could point it out.
Hank, unfortunately the same doesn't work in Python either :-(
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
urllib.urlopen("http://api.twitter.com/1/users/show/denzil_correa.json").read()
File "C:\Python26\lib\urllib.py", line 86, in urlopen
return opener.open(url)
File "C:\Python26\lib\urllib.py", line 205, in open
return getattr(self, name)(url)
File "C:\Python26\lib\urllib.py", line 347, in open_http
errcode, errmsg, headers = h.getreply()
File "C:\Python26\lib\httplib.py", line 1060, in getreply
response = self._conn.getresponse()
File "C:\Python26\lib\httplib.py", line 986, in getresponse
response.begin()
File "C:\Python26\lib\httplib.py", line 391, in begin
version, status, reason = self._read_status()
File "C:\Python26\lib\httplib.py", line 349, in _read_status
line = self.fp.readline()
File "C:\Python26\lib\socket.py", line 397, in readline
data = recv(1)
IOError: [Errno socket error] [Errno 10054] An existing connection was forcibly closed by the remote host
As @splix mentioned in the comments, doing this using just java.net is… suboptimal. I've yet to encounter a situation where HttpClient wasn't the better option. Even better is his suggestion of twitter4j; unless you're trying to create an alternative client library, it's almost always better to use an API wrapper like that than to handle the raw HTTP interactions yourself.
UPDATE:
@Denzil it's odd that you're getting the same error even with twitter4j (I can't test the code until I get some free time to grab the lib, etc.), so I'm beginning to suspect a problem on Twitter's end. If you have Python installed, try the following:
>>> import urllib
>>> urllib.urlopen("http://api.twitter.com/1/users/show/denzil_correa.json").read()
This worked for me.
UPDATE 2:
This definitely sounds like Twitter is intentionally refusing your requests. Possible reasons include: your IP is on their blacklist for some reason, proxy voodoo, or things I haven't thought of. To elaborate on the proxy voodoo: I don't know exactly what your proxy is doing to your requests, but it may be adding a header or something else the Twitter API doesn't like. I'd recommend contacting Twitter support (if such a thing exists for API problems) or posting to the API mailing list.
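To rule out proxy issues with the plain-java.net attempts, you can also point the JVM at your proxy explicitly via the standard networking system properties. A minimal sketch (the host and port are placeholders, not real values):

```java
public class ProxySetup {
    public static void main(String[] args) {
        // Standard JVM networking properties; java.net.HttpURLConnection honors these.
        // Replace host/port with your actual proxy -- these are placeholders.
        System.setProperty("http.proxyHost", "proxy.example.com");
        System.setProperty("http.proxyPort", "8080");

        System.out.println(System.getProperty("http.proxyHost"));
        System.out.println(System.getProperty("http.proxyPort"));
    }
}
```

Note that Apache HttpClient ignores these properties by default; with `DefaultHttpClient` you'd instead set an `HttpHost` proxy on the client's params via `ConnRoutePNames.DEFAULT_PROXY`.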
BTW, here's a thread from the mailing list that mentions ways to see if you're blacklisted.