Looking at the quick start guide, I see it gives the following code example:
CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://targethost/homepage");
CloseableHttpResponse response1 = httpclient.execute(httpGet);
// The underlying HTTP connection is still held by the response object
// to allow the response content to be streamed directly from the network socket.
// In order to ensure correct deallocation of system resources
// the user MUST call CloseableHttpResponse#close() from a finally clause.
// Please note that if response content is not fully consumed the underlying
// connection cannot be safely re-used and will be shut down and discarded
// by the connection manager.
try {
    System.out.println(response1.getStatusLine());
    HttpEntity entity1 = response1.getEntity();
    // do something useful with the response body
    // and ensure it is fully consumed
    EntityUtils.consume(entity1);
} finally {
    response1.close();
}
The two comments in the code above say that we must close the response object for
"correct deallocation of system resources"
and
"if response content is not fully consumed the underlying connection cannot be safely re-used and will be shut down and discarded by the connection manager".
Now Apache have very kindly implemented CloseableHttpResponse for us, which means we can use a try-with-resources block. But the close method only closes the response object; why doesn't it also consume the entity?
Because it is hard to say at that point whether or not the caller intends to re-use the underlying connection. In some cases one may want to read just a small chunk from a large response body and immediately terminate the connection.
In other words, it is the same story over and over again: there is no single behaviour that can make everyone happy.
The following code snippet ensures proper de-allocation of resources while trying to keep the underlying connection alive.
CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://targethost/homepage");
CloseableHttpResponse response1 = httpclient.execute(httpGet);
try {
    System.out.println(response1.getStatusLine());
} finally {
    EntityUtils.consume(response1.getEntity());
}
Related
This problem blocked our whole team for half a day!
We use Apache HttpClient 4.3.x to post and get data from a storage server that provides an HTTP API. In order to improve performance, we used PoolingHttpClientConnectionManager:
public HttpClient createHttpClient() {
    Registry registry = RegistryBuilder.create()....build();
    PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager(registry);
    connectionManager.setMaxTotal(50);
    connectionManager.setDefaultMaxPerRoute(50);
    CloseableHttpClient httpClient = HttpClients.custom()
            .setConnectionManager(connectionManager)
            .build();
    return httpClient;
}
Then we hold an instance of the httpClient in our program and reuse it for every HTTP request:
Global httpClient:
HttpClient httpClient = createHttpClient();
Post some data:
HttpPost httpPut = new HttpPost("...");
HttpResponse response = httpClient.execute(httpPut);
// Notice we get the response content here!
String content = EntityUtils.toString(response.getEntity());
System.out.println(content);
httpPut.releaseConnection();
response.close();
Then get:
HttpGet httpGet = new HttpGet("...");
// Blocked at this line !!!!
HttpResponse response = httpClient.execute(httpGet);
String content = EntityUtils.toString(response.getEntity());
System.out.println(content);
httpPut.releaseConnection();
response.close();
Please notice the line: // Blocked at this line !!!!
The program blocks at that line and never goes to the next line. In debugging mode, I can see it is blocked at:
SocketInputStream.socketRead0()
I've searched through a lot of questions and documents, but no luck.
My colleague just fixed it by setting NoConnectionReuseStrategy.INSTANCE:
HttpClients.custom()
        .setConnectionManager(connectionManager)
        // Following line fixed the problem, but why?
        .setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE)
        .build();
Now it doesn't block, but why?
What does "reuse connection" mean? And is there a performance issue with using NoConnectionReuseStrategy?
Thank you, guys~
I tried to reproduce the blocking http-get (also as an exercise for myself), but even without closing responses I could not get it to block. The ONLY time I managed to make the http-get block was by doing a response.getEntity().getContent() without reading from the returned InputStream and without closing it.
For my tests I used Tomcat 7.0.47 with two very simple servlets (one responding "OK" to a get, the other echoing a post) as a server. The client started 50 threads, with each thread performing 30 alternating http-get and http-post requests (a total of 1,500 requests). The client did not use the RegistryBuilder; instead, the default one was used (created by the PoolingHttpClientConnectionManager itself).
About the NoConnectionReuseStrategy: by default (HttpClient created with HttpClients.createDefault(); I used org.apache.httpcomponents:httpclient:4.3.1), a connection pool is used with a maximum of 2 connections to 1 server. E.g., even if 5 threads are doing all kinds of requests at the same time to 1 server, the connection pool opens only 2 connections, re-uses them for all requests, and ensures that 1 connection is used by 1 thread at any given time. This can have a very positive impact on client performance and significantly reduces load on the server. The only thing you must make sure of is to call response.close() in a finally block (this ensures the connection is returned to the connection pool). By using the NoConnectionReuseStrategy you basically disable the connection pool: for each request a new connection will be created. I recommend you enable debug logging for the category org.apache.http.impl.conn.PoolingHttpClientConnectionManager; it is very informative.
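For illustration, a sketch of the GET from the question written with that pattern, assuming the client is declared as a CloseableHttpClient so that execute() returns a CloseableHttpResponse:
CloseableHttpResponse response = httpClient.execute(new HttpGet("..."));
try {
    // fully read the body so the connection can be re-used
    String content = EntityUtils.toString(response.getEntity());
    System.out.println(content);
} finally {
    // returns the leased connection to the PoolingHttpClientConnectionManager
    response.close();
}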
A note about httpPut.releaseConnection(): this does not actually release a connection, it only ensures that you can re-use the "httpPut" object in a next request (see the apidocs, follow the shown link). Also note that in your code for the "httpGet", you call releaseConnection() on "httpPut" instead of "httpGet".
Ran into this problem just a while back. In case someone else comes across this problem, this post might be useful.
I am using a Java Servlet to service my requests. When I wrote to the response stream using the PrintWriter instance, my client blocked. I tried writing to the OutputStream directly (response.getOutputStream().write(...)) and it worked.
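For reference, a minimal servlet sketch along the lines of the fix described above (class name and payload are made up):
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EchoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] body = "myresponse".getBytes("UTF-8");
        resp.setContentLength(body.length);
        // write to the raw OutputStream instead of the PrintWriter
        resp.getOutputStream().write(body);
        resp.getOutputStream().flush();
    }
}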
I am developing an application which connects to a server to get some data.
In this, I first want to check whether the application can connect to the server, and whether the server is up or down. Based on the result I want to do my further processing.
So how do I get the server status?
Here is the code which I am using:
Code:
try {
    HttpClient httpclient = new DefaultHttpClient();
    HttpPost httppost = new HttpPost(
            "http://192.168.1.23/sip_chat_api/getcountry.php");
    HttpResponse response = httpclient.execute(httppost);
    HttpEntity entity = response.getEntity();
    is = entity.getContent();
} catch (Exception e) {
}
Maintaining session cookies is the best choice here; please see how to use a session cookie: How do I make an http request using cookies on Android?
Here, before sending a request to the server, check for the session cookie. If it exists, proceed with the communication.
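A rough sketch of that check, assuming a DefaultHttpClient whose cookie store was populated by an earlier request (the cookie name PHPSESSID is an assumption, not something from the question):
DefaultHttpClient httpclient = new DefaultHttpClient();
// ... earlier requests populate the client's cookie store ...
boolean hasSession = false;
for (Cookie cookie : httpclient.getCookieStore().getCookies()) {
    if ("PHPSESSID".equals(cookie.getName())) {  // assumed cookie name
        hasSession = true;
    }
}
if (hasSession) {
    // proceed with the normal request to the server
}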
Update:
The Java equivalent -- which I believe works on Android -- should be:
InetAddress.getByName(host).isReachable(timeOut)
Check the getStatusLine() method of HttpResponse.
Any status code other than 200 means there was a problem, and each status code points to a different problem.
http://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/HttpResponse.html?is-external=true
http://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/StatusLine.html#getStatusCode()
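A minimal sketch of that check, using the URL from the question; an IOException from execute() means the server could not be reached at all, while a non-200 status means it answered but with an error:
HttpClient httpclient = new DefaultHttpClient();
try {
    HttpResponse response = httpclient.execute(
            new HttpPost("http://192.168.1.23/sip_chat_api/getcountry.php"));
    int statusCode = response.getStatusLine().getStatusCode();
    if (statusCode == HttpStatus.SC_OK) {
        // server is up and the request succeeded
    } else {
        // server answered, but with an error status
    }
} catch (IOException e) {
    // could not reach the server at all (down, wrong address, no network)
}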
I'm currently using HttpURLConnection to stream live content such as a radio broadcast. However, it seems that using HttpClient is a better option, since it's well supported by Android and is a better implementation. Also, there seems to be logic for automatically reconnecting after a lost connection.
My problem is that I can't get this to work. It's always hanging when calling httpclient.execute(...).
What am I doing wrong?
HttpClient httpclient = new DefaultHttpClient();
HttpGet httpget = new HttpGet("http://208.76.243.123:7100");
HttpResponse response = httpclient.execute(httpget);
HttpEntity entity = response.getEntity();
Run it in a debugger and, when it hangs, break. Then find the thread that is executing your code and see in the stack trace where exactly it is blocked. You will see whether it is blocked on I/O or whether something else is happening. With that data it will be easier to identify the problem.
Are you sure your server understands the HTTP protocol? (I assume yes; it sounds like you had a different client working.) It is possible the execute method is blocking because it has not seen a valid response header yet.
You probably want entity.getContent() which will return a handle to a stream. See this question.
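A rough sketch of reading that stream, reusing the entity from the snippet above (the buffer size and what you do with the bytes are up to you):
InputStream stream = entity.getContent();
try {
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = stream.read(buffer)) != -1) {
        // feed buffer[0..bytesRead) to the audio player / decoder
    }
} finally {
    // closing the stream releases the underlying connection
    stream.close();
}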
I have code similar to the following:
try {
    HttpPost post = new HttpPost(httpsUrl);
    setHeaders(post);
    HttpEntity entity = new StringEntity(request, "UTF-8");
    post.setEntity(entity);
    HttpResponse response = httpclient.execute(post);
    String result = EntityReader.readContent(response.getEntity());
    checkAnswer(result);
    return result;
} catch (Exception e) {
    throw new ZapException("Error executing the http post request: " + e.getMessage(), e);
}
It sends the content of request to a server via POST using an httpclient instance that might have already been used before (it has persistent connections turned on, since we're sending quite a few requests to the same server...).
This sometimes fails with a SocketTimeoutException with "Read timed out" as the message.
It's not clear to us why it only fails some of the time, when most of the time it doesn't. What gives?
In the following, I assume you are using Apache Commons HttpClient (org.apache.commons.httpclient.HttpClient).
Maybe you get thrown a SocketTimeoutException simply because, occasionally, the host your HttpClient instance is communicating with takes too long to respond, triggering HttpClient's cancellation routine.
You can increase the connection timeout and the socket timeout with the following:
HttpConnectionParams params = httpclient.getHttpConnectionManager().getParams();
params.setConnectionTimeout(20000);
params.setSoTimeout(15000);
Additionally, if you still face timeouts despite increasing the timeout limits, it is good practice to handle the SocketTimeoutException gracefully, for example by retrying the connection a second and third time.
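A sketch of such a retry loop around the post from the question (maxAttempts is an illustrative value; EntityReader and ZapException are the question's own helpers):
String result = null;
int maxAttempts = 3;  // illustrative value
for (int attempt = 1; attempt <= maxAttempts && result == null; attempt++) {
    try {
        HttpResponse response = httpclient.execute(post);
        result = EntityReader.readContent(response.getEntity());
    } catch (SocketTimeoutException e) {
        if (attempt == maxAttempts) {
            throw new ZapException("Read timed out after " + maxAttempts + " attempts", e);
        }
        // otherwise fall through and retry the request
    }
}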
I am trying to write a simple HTTP client application in Java and am a bit confused by the seemingly different ways to establish HTTP client connections and efficiently reuse the objects.
Currently I am using the following steps (I have left out exception handling for simplicity):
Iterator<URI> uriIterator = someURIs();
HttpClient client = new DefaultHttpClient();
while (uriIterator.hasNext()) {
    URI uri = uriIterator.next();
    HttpGet request = new HttpGet(uri);
    HttpResponse response = client.execute(request);
    HttpEntity entity = response.getEntity();
    InputStream content = entity.getContent();
    processStream(content);
    content.close();
}
In regard to the code above, my questions are:
Assuming all URI's are pointing to the same host (but different resources on that host). What is the recommended way to use a single http connection for all requests?
And how do you close the connection after the last request?
Edit:
I am confused about why the above steps never use HttpURLConnection. I would assume client.execute() creates one, but since I never see it, I am not sure how to close it or reuse it.
To make use of persistent connections efficiently, you need to use a pooling connection manager:
SchemeRegistry schemeRegistry = new SchemeRegistry();
schemeRegistry.register(
        new Scheme("http", 80, PlainSocketFactory.getSocketFactory()));
ClientConnectionManager cm = new ThreadSafeClientConnManager(schemeRegistry);
HttpClient httpClient = new DefaultHttpClient(cm);
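Tying this back to the loop in the question, a sketch of how it might be used: closing each content stream returns the connection to the manager, and shutting the manager down closes the remaining connections after the last request (uriIterator and processStream are the ones from the question):
HttpClient client = new DefaultHttpClient(cm);
while (uriIterator.hasNext()) {
    HttpGet request = new HttpGet(uriIterator.next());
    HttpResponse response = client.execute(request);
    InputStream content = response.getEntity().getContent();
    processStream(content);
    content.close();  // fully reading/closing the stream returns the connection to the pool
}
cm.shutdown();  // closes all pooled connections after the last request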
My biggest problem with HttpURLConnection is that its support for persistent connections (keep-alive) is very buggy.