I'm connecting to a simple RSS feed using HttpURLConnection. It works perfectly. I'd like to add a timeout to the connection, since I don't want my app hanging in the event of a bad connection. This is the code I use, and the setConnectTimeout method doesn't have any effect whatsoever.
HttpURLConnection http = (HttpURLConnection) mURL.openConnection();
http.setConnectTimeout(15000); //timeout after 15 seconds
...
If it helps, I'm developing on Android.
You should try to set the read timeout as well (http.setReadTimeout()). Oftentimes, a web server will happily accept your connection, but it might be slow in actually responding to the request.
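For example, a minimal sketch (reusing the mURL field and the 15-second value from the question; note that both timeouts only take effect once you actually connect and read):
HttpURLConnection http = (HttpURLConnection) mURL.openConnection();
http.setConnectTimeout(15000); // max time to establish the TCP connection
http.setReadTimeout(15000);    // max time a single read may block once connected
try {
    InputStream in = http.getInputStream(); // connect + read happen here
    // ... parse the feed from 'in' ...
} catch (SocketTimeoutException e) {
    // either the connect or a read exceeded 15 seconds
} finally {
    http.disconnect();
}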
You probably do one or both of the following:
1) don't read anything from the connection
2) don't catch and handle the SocketTimeoutException properly
As mentioned here, use logic similar to this:
int TIMEOUT_VALUE = 1000;
try {
    URL testUrl = new URL("http://google.com");
    StringBuilder answer = new StringBuilder(100000);
    long start = System.nanoTime();

    URLConnection testConnection = testUrl.openConnection();
    testConnection.setConnectTimeout(TIMEOUT_VALUE);
    testConnection.setReadTimeout(TIMEOUT_VALUE);

    BufferedReader in = new BufferedReader(new InputStreamReader(testConnection.getInputStream()));
    String inputLine;
    while ((inputLine = in.readLine()) != null) {
        answer.append(inputLine);
        answer.append("\n");
    }
    in.close();

    long elapsed = System.nanoTime() - start;
    System.out.println("Elapsed (ms): " + elapsed / 1000000);
    System.out.println("Answer:");
    System.out.println(answer);
} catch (SocketTimeoutException e) {
    System.out.println("More than " + TIMEOUT_VALUE + " elapsed.");
}
I had a similar problem: HttpURLConnection won't time out mid-download. For example, if you turn off Wi-Fi while a download is in progress, mine continued to say it was downloading, stuck at the same percentage.
I found a solution using a TimerTask, connected to an AsyncTask named DownloaderTask. Try:
class Timeout extends TimerTask {
    private DownloaderTask _task;

    public Timeout(DownloaderTask task) {
        _task = task;
    }

    @Override
    public void run() {
        Log.w(TAG, "Timed out while downloading.");
        _task.cancel(false);
    }
}
Then, in the actual download loop, set a timer that triggers on a timeout error:
_outFile.createNewFile();
FileOutputStream file = new FileOutputStream(_outFile);
out = new BufferedOutputStream(file);
byte[] data = new byte[1024];
int count;
_timer = new Timer();
// Read in chunks; much more efficient than byte by byte, lower CPU usage.
while ((count = in.read(data, 0, 1024)) != -1 && !isCancelled()) {
    out.write(data, 0, count);
    downloaded += count;
    publishProgress((int) ((downloaded / (float) contentLength) * 100));
    // Restart the watchdog timer on every successful read.
    _timer.cancel();
    _timer = new Timer();
    _timer.schedule(new Timeout(this), 1000 * 20);
}
_timer.cancel();
out.flush();
If it fails to download even 1 KB in 20 seconds, the task cancels instead of appearing to download forever.
I was facing the same issue. Setting the connect timeout and read timeout did not seem to throw the exception as expected; it took far longer than the configured value. It took me a while to check the URLConnection documentation and understand what was going on. The documentation for setConnectTimeout contains a warning:
"if the hostname resolves to multiple IP addresses, this client will try each. If connecting to each of these addresses fails, multiple timeouts will elapse before the connect attempt throws an exception."
This means that if your host resolves to 10 IP addresses, your actual timeout can be up to 10 * the timeout you set.
You can check the IPs for the host name here:
http.setConnectTimeout(15000);
http.setReadTimeout(15000);
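As a quick sanity check, here is a sketch using java.net.InetAddress.getAllByName (the hostname is a placeholder) to see how many addresses your host resolves to, and hence the worst-case multiplier on your timeout:
InetAddress[] addresses = InetAddress.getAllByName("example.com");
System.out.println(addresses.length + " addresses resolved:");
for (InetAddress a : addresses) {
    System.out.println("  " + a.getHostAddress());
}
// worst case: total connect time ~ addresses.length * connect timeout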
It's caused by one of the following:
1. You are connected to Wi-Fi but have no internet connection.
2. You are connected over GSM data but your transfer rate is very poor.
In both cases you get a host exception after about 20 seconds. In my opinion the best way to handle this is:
public boolean isOnline() {
    final int TIMEOUT_MILLS = 3000;
    final boolean[] online = {false};

    ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
    NetworkInfo netInfo = cm.getActiveNetworkInfo();
    if (netInfo != null && netInfo.isConnected()) {
        final long time = System.currentTimeMillis();
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    URL url = new URL("http://www.google.com");
                    HttpURLConnection urlc = (HttpURLConnection) url.openConnection();
                    urlc.setConnectTimeout(TIMEOUT_MILLS);
                    urlc.setReadTimeout(TIMEOUT_MILLS);
                    urlc.connect();
                    if (urlc.getResponseCode() == 200) {
                        online[0] = true;
                    }
                } catch (IOException e) {
                    loger.add(Loger.ERROR, e.toString());
                }
            }
        }).start();
        // Busy-wait up to TIMEOUT_MILLS for the probe thread to set the flag.
        while (((System.currentTimeMillis() - time) <= TIMEOUT_MILLS)) {
            if ((System.currentTimeMillis() - time) >= TIMEOUT_MILLS) {
                return online[0];
            }
        }
    }
    return online[0];
}
Remember: use it in an AsyncTask or a Service.
It's a simple solution: you start a new thread with the HttpURLConnection (remember to use start(), not run()). Then, in a while loop, you wait up to 3 seconds for the result. If nothing happens, return false. This lets you avoid waiting for the host exception, and works around setConnectTimeout() having no effect when you don't have an internet connection.
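The same idea can also be written without the busy-wait. Here is a sketch using Thread.join(timeout) in place of the polling loop (inside isOnline(), with the same URL and 3-second value as above):
final boolean[] online = {false};
Thread probe = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            HttpURLConnection urlc = (HttpURLConnection) new URL("http://www.google.com").openConnection();
            urlc.setConnectTimeout(3000);
            urlc.setReadTimeout(3000);
            urlc.connect();
            online[0] = (urlc.getResponseCode() == 200);
        } catch (IOException e) {
            // unreachable or timed out; leave online[0] == false
        }
    }
});
probe.start();
try {
    probe.join(3000); // wait at most 3 seconds instead of spinning
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
return online[0];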
A zero value means an infinite timeout, i.e. the attempt blocks until the connection occurs; zero is normally the default:
connection.setConnectTimeout(0);
connection.setReadTimeout(0);
Refer to the documentation here.
Try to set the connect timeout before actually opening the connection, i.e. before any call that performs network I/O.
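A sketch of the intended order of calls (openConnection() performs no network I/O, so the timeout must be configured before connect() or the first read):
HttpURLConnection http = (HttpURLConnection) url.openConnection(); // no I/O yet
http.setConnectTimeout(15000); // set before any call that hits the network
http.setReadTimeout(15000);
http.connect(); // the connect timeout applies here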
Related
I'm trying to implement a Java proxy for HTTP (HTTPS will be the extension once HTTP works). I found a lot of resources on the internet and have tried to solve all the problems on my own so far, but now I've reached a point where I'm stuck.
My proxy does not load full HTTP websites. I get a lot of "socket is already closed" error messages, so I think I'm trying to send something over a socket that has already been closed.
My problem now is that I cannot see why this happens. I've thought about it a lot, but I can't find the mistake. From my side, the sockets only get closed when the server closes the connection to my proxy server, which happens when I read a -1 on the input stream from the server.
I would be happy for any help :-)
greetings
Christoph
public class ProxyThread extends Thread {
    Socket client_socket;
    Socket server_socket;
    boolean thread_var = true;
    int buffersize = 32768;

    ProxyThread(Socket s) {
        client_socket = s;
    }

    public void run() {
        System.out.println("Run Client Thread");
        try {
            // Read request
            final byte[] request = new byte[4096];
            byte[] response = new byte[4096];
            final InputStream in_client = client_socket.getInputStream();
            OutputStream out_client = client_socket.getOutputStream();

            in_client.read(request);
            System.out.println("---------------------- Request Info --------------------");
            System.out.println(new String(request));

            Connection conn = new Connection(new String(request));
            System.out.println("---------------------- Connection Info --------------------");
            System.out.println("Host: " + conn.host);
            System.out.println("Port: " + conn.port);
            System.out.println("URL: " + conn.URL);
            System.out.println("Type: " + conn.type);
            System.out.println("Keep-Alive:" + conn.keep_alive);

            server_socket = new Socket(conn.URL, conn.port);
            InputStream in_server = server_socket.getInputStream();
            final OutputStream out_server = server_socket.getOutputStream();
            out_server.write(request);
            out_server.flush();

            Thread t = new Thread() {
                public void run() {
                    int bytes_read;
                    try {
                        while ((bytes_read = in_client.read(request)) != -1) {
                            out_server.write(request, 0, bytes_read);
                            out_server.flush();
                        }
                    } catch (IOException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                }
            };
            t.start();

            int bytes_read;
            while ((bytes_read = in_server.read(response)) != -1) {
                out_client.write(response, 0, bytes_read);
                out_client.flush();
                //System.out.println("---------------------- Respone Info --------------------");
                //System.out.println(new String(response));
            }
            //System.out.println("EIGENTLICH FERTIG");
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                client_socket.close();
                server_socket.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
EDIT:
My HTTP proxy now works. The answer is pretty helpful once you understand what is really going on. If you came here looking for a solution, these questions may help you:
Does the client send requests to only one website / web server? That is, do we always have the same port / hostname?
The loop from the answer is very useful, but think about where to place it.
Last thing: thanks @EJP, it's working; your reply was very helpful. It just took some time to understand it!
You are making all the usual mistakes, and a few more.
The entire request is not guaranteed to arrive in a single read. You can't assume more than a single byte has arrived. You have to loop.
You aren't checking for end of stream at this stage.
You need a good knowledge of RFC 2616 to implement HTTP, specifically the parts about Content-length and transfer encoding.
You cannot assume that the server will close the connection after sending the response.
Closing either the input or the output stream of a socket closes the socket. This is the reason for your SocketException: socket closed.
When you get to HTTPS you will need to look at the CONNECT verb.
Flushing a socket output stream does nothing, and flushing inside a loop is to be avoided.
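For the first point, a rough sketch of what "you have to loop" means: keep reading from the client until you've seen the blank line that terminates the HTTP request headers (java.io imports assumed; a real proxy must then also honour Content-Length / transfer encoding for any request body):
static byte[] readRequestHeaders(InputStream in) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    int state = 0; // how far we've matched the "\r\n\r\n" terminator
    int b;
    while (state < 4 && (b = in.read()) != -1) { // -1 means the client closed the socket
        buf.write(b);
        if (b == '\r') {
            state = (state == 2) ? 3 : 1;
        } else if (b == '\n') {
            state = (state == 1) ? 2 : (state == 3) ? 4 : 0;
        } else {
            state = 0;
        }
    }
    return buf.toByteArray(); // the headers, or whatever arrived before EOF
}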
I have the following code (Android 4):
private HttpURLConnection conn = null;

private synchronized String downloadUrl(String myurl) {
    InputStream is = null;
    BufferedReader _bufferReader = null;
    try {
        URL url_service = new URL(.....);
        System.setProperty("http.keepAlive", "false");
        System.setProperty("http.maxConnections", "5");
        conn = (HttpURLConnection) url_service.openConnection();
        conn.setReadTimeout(DataHandler.TIME_OUT);
        conn.setConnectTimeout(DataHandler.TIME_OUT);
        conn.setRequestMethod("POST");
        conn.setDoInput(true);
        conn.setDoOutput(true);
        conn.setRequestProperty("connection", "close");
        conn.setInstanceFollowRedirects(false);
        conn.connect();
        StringBuilder total = null;
        if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
            is = conn.getInputStream();
            _bufferReader = new BufferedReader(new InputStreamReader(is));
            total = new StringBuilder();
            String line;
            while ((line = _bufferReader.readLine()) != null) {
                total.append(line);
            }
        } else {
            onDomainError();
        }
        return total.toString();
    } catch (SocketTimeoutException ste) {
        onDomainError();
    } catch (Exception e) {
        onDomainError();
    } finally {
        if (is != null) {
            try {
                is.close();
            } catch (IOException e) {
                // TODO Auto-generated catch block
            }
        }
        if (_bufferReader != null) {
            try {
                _bufferReader.close();
            } catch (Exception e) {
                // TODO: handle exception
            }
        }
        if (conn != null)
            conn.disconnect();
        conn = null;
    }
    return null;
}
.disconnect() is used, keep-alive is set to false, and max connections is set to 5. However, if a SocketTimeoutException occurs, connections are not closed and the device soon runs out of memory. How is this possible?
Also, according to http://developer.android.com/reference/java/net/HttpURLConnection.html, HttpURLConnection should close connections on disconnect() if keep-alive is set to false, and reuse them when keep-alive is true. Neither of these approaches works for me. Any ideas what could be wrong?
One possibility is that you are not setting the properties soon enough. According to the javadoc, the "keepalive" property needs to be set to false before issuing any HTTP requests. And that might actually mean before the URL protocol drivers are initialized.
Another possibility is that your OOME is not caused by this at all. It could be caused by what your app does with the content it has downloaded.
There are some other problems with your code too.
The variable names url_service, _bufferReader and myurl are all violations of Java's identifier naming conventions.
The conn variable should be a local variable. Making it a field makes the downloadUrl method non-reentrant. (And that might be contributing to your problems ... if multiple threads are sharing one instance of this object!)
You don't need to close the buffered reader and the input stream. Just close the reader, and it will close the stream. This probably doesn't matter for a reader, but if you do that for a buffered writer AND you close the output stream first, you are liable to get exceptions.
UPDATE
So we definitely have lots of non-garbage HttpURLConnectionImpl instances, and we probably have multiple threads running this code via AsyncTask.
If you try to connect to a non-responding site (e.g. one where the TCP/IP connect requests are black-holing ...) then the conn.connect() call is going to block for a long time and eventually throw an exception. If the connect timeout is long enough, and your code is doing a potentially unbounded number of these calls in parallel, then you are liable to have lots of these instances.
If this theory is correct, then your problem is nothing to do with keep-alives and connections not being closed. The problem is at the other end ... connections that are never properly established in the first place clogging up memory, and each one tying up a thread / thread stack:
Try reducing the connect timeout.
Try running these requests using an Executor with a bounded thread pool; see the sketch after the note below.
Note what it says in the AsyncTask javadoc:
"AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most.) If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent pacakge such as Executor, ThreadPoolExecutor and FutureTask."
I am writing a Java program to compute the HTTP connection time for (let's say) 5 HTTP connections (to different IPs).
In the first scenario, without threading, the program connects to and tests the HTTP servers one by one, i.e. it finishes testing one server before proceeding to the next. In this scenario the total time taken is very long. Moreover, the timeout does not work properly; for example, I have set
setConnectTimeout(5 * 1000);
setReadTimeout(5 * 1000);
but the time returned by
long starTime = System.currentTimeMillis();
c.connect();
String line;
BufferedReader in = new BufferedReader(new InputStreamReader(uc.getInputStream()));
while ((line = in.readLine()) != null){
page.append(line);
elapseTime = System.currentTimeMillis() - starTime;
can be more than 5 seconds; some even go up to 30 seconds (but I set only 5 seconds as the timeout).
So I changed the implementation to be multithreaded, but the result is even more ridiculous: now I can't even get one successful connection.
Now my question is: can we establish multiple connections using multiple threads? If the answer is yes, what do I have to watch out for to avoid the issues above?
Thanks.
Extra info:
1) I am computing the proxy connection speed, so yes, the connection is a proxy connection.
2) I created around 100 threads. I think that should be fine, right?
How are you setting up your connections? Are you using a socket connection? If so, depending on how you set up your socket, you may find that the connect timeout value is ignored:
Socket sock = new Socket("hostname", port); // the constructor already attempts to connect
sock.setSoTimeout(5000); // read timeout only; the connection attempt has already happened
This will actually not set the connect timeout value, as the constructor has already attempted to connect.
SocketAddress sockaddr = new InetSocketAddress(host, port);
Socket sock = new Socket();
sock.connect(sockaddr, 5000);
Will more accurately connect with a timeout value. This may explain why your socket timeouts are not working.
public float getConnectionTime() {
    long elapseTime = 0;
    Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(ipAdd, portNum));
    URL url;
    StringBuilder page = new StringBuilder();
    HttpURLConnection uc = null;
    try {
        uc = (HttpURLConnection) Main.targetMachine.openConnection(proxy);
        // uc = (HttpURLConnection) Main.targetMachine.openConnection();
        uc.setConnectTimeout(Main.timeOut);
        uc.setReadTimeout(Main.timeOut);
        long starTime = System.currentTimeMillis();
        uc.connect();
        // if (uc.getResponseCode() == HttpURLConnection.HTTP_OK){
        //     System.out.println("55555");
        // } else System.out.println("88888");
        String line;
        BufferedReader in = new BufferedReader(new InputStreamReader(uc.getInputStream()));
        while ((line = in.readLine()) != null) {
            page.append(line);
            elapseTime = System.currentTimeMillis() - starTime;
        }
    } catch (SocketTimeoutException e) {
        System.out.println("time out lo");
        // e.printStackTrace();
        return 9999; // if time out, use 9999 signal.
    } catch (IOException e) {
        System.out.println("open connection error, connect error or inputstream error");
        // e.printStackTrace();
        return 9999;
    } finally {
        if (uc != null)
            uc.disconnect();
    }
    // System.out.println(page);
    return (float) elapseTime / 1000;
}
I need a monitor class that regularly checks whether a given HTTP URL is available. I can take care of the "regularly" part using the Spring TaskExecutor abstraction, so that's not the topic here. The question is: What is the preferred way to ping a URL in java?
Here is my current code as a starting point:
try {
    final URLConnection connection = new URL(url).openConnection();
    connection.connect();
    LOG.info("Service " + url + " available, yeah!");
    available = true;
} catch (final MalformedURLException e) {
    throw new IllegalStateException("Bad URL: " + url, e);
} catch (final IOException e) {
    LOG.info("Service " + url + " unavailable, oh no!", e);
    available = false;
}
Is this any good at all (will it do what I want)?
Do I have to somehow close the connection?
I suppose this is a GET request. Is there a way to send HEAD instead?
Is this any good at all (will it do what I want?)
You can do so. Another feasible way is using java.net.Socket.
public static boolean pingHost(String host, int port, int timeout) {
try (Socket socket = new Socket()) {
socket.connect(new InetSocketAddress(host, port), timeout);
return true;
} catch (IOException e) {
return false; // Either timeout or unreachable or failed DNS lookup.
}
}
There's also the InetAddress#isReachable():
boolean reachable = InetAddress.getByName(hostname).isReachable(timeout); // timeout in milliseconds
This, however, doesn't explicitly test port 80. You risk getting false negatives due to a firewall blocking other ports.
Do I have to somehow close the connection?
No, you don't explicitly need to. It's handled and pooled under the hood.
I suppose this is a GET request. Is there a way to send HEAD instead?
You can cast the obtained URLConnection to HttpURLConnection and then use setRequestMethod() to set the request method. However, you need to take into account that some poor web apps or homegrown servers may return an HTTP 405 error for a HEAD (i.e. not available, not implemented, or not allowed) while a GET works perfectly fine. Using GET is more reliable in case you intend to verify links/resources rather than domains/hosts.
Testing the server for availability is not enough in my case, I need to test the URL (the webapp may not be deployed)
Indeed, connecting to a host only tells you whether the host is available, not whether the content is available. It can just as well happen that the web server started without problems, but the web app failed to deploy during the server's start. This will, however, usually not cause the entire server to go down. You can determine that by checking whether the HTTP response code is 200.
HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
connection.setRequestMethod("HEAD");
int responseCode = connection.getResponseCode();
if (responseCode != 200) {
// Not OK.
}
// < 100 is undetermined.
// 1nn is informal (shouldn't happen on a GET/HEAD)
// 2nn is success
// 3nn is redirect
// 4nn is client error
// 5nn is server error
For more detail about response status codes see RFC 2616 section 10. Calling connect() is by the way not needed if you're determining the response data. It will implicitly connect.
For future reference, here's a complete example in the flavor of a utility method, also taking timeouts into account:
/**
 * Pings a HTTP URL. This effectively sends a HEAD request and returns <code>true</code> if the response code is in
 * the 200-399 range.
 * @param url The HTTP URL to be pinged.
 * @param timeout The timeout in millis for both the connection timeout and the response read timeout. Note that
 * the total timeout is effectively two times the given timeout.
 * @return <code>true</code> if the given HTTP URL has returned response code 200-399 on a HEAD request within the
 * given timeout, otherwise <code>false</code>.
 */
public static boolean pingURL(String url, int timeout) {
    url = url.replaceFirst("^https", "http"); // Otherwise an exception may be thrown on invalid SSL certificates.
    try {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.setConnectTimeout(timeout);
        connection.setReadTimeout(timeout);
        connection.setRequestMethod("HEAD");
        int responseCode = connection.getResponseCode();
        return (200 <= responseCode && responseCode <= 399);
    } catch (IOException exception) {
        return false;
    }
}
Instead of using URLConnection, use HttpURLConnection by calling openConnection() on your URL object.
Then getResponseCode() will give you the HTTP response code once you've read from the connection.
Here is the code:
HttpURLConnection connection = null;
try {
    URL u = new URL("http://www.google.com/");
    connection = (HttpURLConnection) u.openConnection();
    connection.setRequestMethod("HEAD");
    int code = connection.getResponseCode();
    System.out.println("" + code);
    // You can decide based on the HTTP return code received. 200 is success.
} catch (MalformedURLException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} finally {
    if (connection != null) {
        connection.disconnect();
    }
}
Also check similar question How to check if a URL exists or returns 404 with Java?
Hope this helps.
You could also use HttpURLConnection, which allows you to set the request method (to HEAD for example). Here's an example that shows how to send a request, read the response, and disconnect.
The following code performs a HEAD request to check whether the website is available or not.
public static boolean isReachable(String targetUrl) throws IOException
{
HttpURLConnection httpUrlConnection = (HttpURLConnection) new URL(
targetUrl).openConnection();
httpUrlConnection.setRequestMethod("HEAD");
try
{
int responseCode = httpUrlConnection.getResponseCode();
return responseCode == HttpURLConnection.HTTP_OK;
} catch (UnknownHostException noInternetConnection)
{
return false;
}
}
public boolean isOnline() {
Runtime runtime = Runtime.getRuntime();
try {
Process ipProcess = runtime.exec("/system/bin/ping -c 1 8.8.8.8");
int exitValue = ipProcess.waitFor();
return (exitValue == 0);
} catch (IOException | InterruptedException e) { e.printStackTrace(); }
return false;
}
Possible Questions
Is this really fast enough? Yes, very fast!
Couldn't I just ping my own page, which I want to request anyway? Sure! You could even check both, if you want to differentiate between "internet connection available" and your own servers being reachable.
What if the DNS is down? Google DNS (e.g. 8.8.8.8) is the largest public DNS service in the world. As of 2013 it serves 130 billion requests a day. Let's just say your app not responding would probably not be the talk of the day.
Read the link; it seems very good.
EDIT:
In my experience of using it, it's not as fast as this method:
public boolean isOnline() {
NetworkInfo netInfo = connectivityManager.getActiveNetworkInfo();
return netInfo != null && netInfo.isConnectedOrConnecting();
}
They are a bit different, but for the purpose of just checking the connection to the internet, the first method may be slow due to the connection variables.
Consider using the Restlet framework, which has great semantics for this sort of thing. It's powerful and flexible.
The code could be as simple as:
Client client = new Client(Protocol.HTTP);
Response response = client.get(url);
if (response.getStatus().isError()) {
// uh oh!
}
I seem to be running into a peculiar problem on Android 1.5 when a library I'm using (signpost 1.1-SNAPSHOT) makes two consecutive connections to a remote server. The second connection always fails with an HttpURLConnection.getResponseCode() of -1.
Here's a testcase that exposes the problem:
// BROKEN
public void testDefaultOAuthConsumerAndroidBug() throws Exception {
    for (int i = 0; i < 2; ++i) {
        final HttpURLConnection c = (HttpURLConnection) new URL("https://api.tripit.com/oauth/request_token").openConnection();
        final DefaultOAuthConsumer consumer = new DefaultOAuthConsumer(api_key, api_secret, SignatureMethod.HMAC_SHA1);
        consumer.sign(c); // This line...
        final InputStream is = c.getInputStream();
        while (is.read() >= 0) ; // ... in combination with this line causes responseCode -1 for i==1 when using api.tripit.com but not mail.google.com
        assertTrue(c.getResponseCode() > 0);
    }
}
Basically, if I sign the request and then consume the entire input stream, the next request will fail with a result code of -1. The failure doesn't seem to happen if I just read one character from the input stream.
Note that this doesn't happen for any url -- just specific urls such as the one above.
Also, if I switch to using HttpClient instead of HttpURLConnection, everything works fine:
// WORKS
public void testCommonsHttpOAuthConsumerAndroidBug() throws Exception {
    for (int i = 0; i < 2; ++i) {
        final HttpGet c = new HttpGet("https://api.tripit.com/oauth/request_token");
        final CommonsHttpOAuthConsumer consumer = new CommonsHttpOAuthConsumer(api_key, api_secret, SignatureMethod.HMAC_SHA1);
        consumer.sign(c);
        final HttpResponse response = new DefaultHttpClient().execute(c);
        final InputStream is = response.getEntity().getContent();
        while (is.read() >= 0) ;
        assertTrue(response.getStatusLine().getStatusCode() == 200);
    }
}
I've found references to what seems to be a similar problem elsewhere, but so far no solutions. If they're truly the same problem, then the problem probably isn't with signpost, since the other references make no mention of it.
Any ideas?
Try setting this property to see if it helps:
http.keepAlive=false
I've seen similar problems when the server response is not understood by URLConnection and the client/server get out of sync.
If this solves your problem, you have to get a HTTP trace to see exactly what's special about the response.
EDIT: This change just confirms my suspicion. It doesn't solve your problem; it just hides the symptom.
If the response from first request is 200, we need a trace. I normally use Ethereal/Wireshark to get the TCP trace.
If your first response is not 200, I do see a problem in your code. With OAuth, the error response (401) actually returns data, which includes ProblemAdvice, Signature Base String etc. to help you debug. You need to read everything from the error stream. Otherwise, it's going to confuse the next connection, and that's the cause of the -1. The following example shows you how to handle errors correctly:
public static String get(String url) throws IOException {
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    URLConnection conn = null;
    byte[] buf = new byte[4096];
    try {
        URL a = new URL(url);
        conn = a.openConnection();
        InputStream is = conn.getInputStream();
        int ret = 0;
        while ((ret = is.read(buf)) > 0) {
            os.write(buf, 0, ret);
        }
        // close the inputstream
        is.close();
        return new String(os.toByteArray());
    } catch (IOException e) {
        try {
            int respCode = ((HttpURLConnection) conn).getResponseCode();
            InputStream es = ((HttpURLConnection) conn).getErrorStream();
            int ret = 0;
            // read the response body
            while ((ret = es.read(buf)) > 0) {
                os.write(buf, 0, ret);
            }
            // close the errorstream
            es.close();
            return "Error response " + respCode + ": " +
                    new String(os.toByteArray());
        } catch (IOException ex) {
            throw ex;
        }
    }
}
I've encountered the same problem when I did not read all the data from the InputStream before closing it and opening a second connection. It was also fixed either with System.setProperty("http.keepAlive", "false"); or simply by looping until I had read the rest of the InputStream.
Not completely related to your issue, but hope this helps anyone else with a similar problem.
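A sketch of the draining variant, assuming is is the response InputStream you are about to close:
byte[] skip = new byte[4096];
while (is.read(skip) != -1) {
    // discard the remainder so the pooled connection is left in a clean state
}
is.close();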
Google provided an elegant workaround since it's only happening prior to Froyo:
private void disableConnectionReuseIfNecessary() {
    // HTTP connection reuse which was buggy pre-froyo
    if (Integer.parseInt(Build.VERSION.SDK) < Build.VERSION_CODES.FROYO) {
        System.setProperty("http.keepAlive", "false");
    }
}
Cf. http://android-developers.blogspot.ca/2011/09/androids-http-clients.html
Or you can set the HTTP header on the connection (HttpURLConnection):
conn.setRequestProperty("Connection", "close");
Can you verify that the connection is not getting closed before you finish reading the response? Maybe HttpClient parses the response code right away and saves it for future queries, whereas HttpURLConnection could be returning -1 once the connection is closed?
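One way to test that hypothesis, as a sketch (url is a placeholder): grab the response code before and after the body is consumed and compare:
HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
int before = c.getResponseCode(); // parsed from the status line up front
InputStream is = c.getInputStream();
while (is.read() >= 0) ;          // consume the entire body
int after = c.getResponseCode();  // still the same, or -1 after close?
is.close();
System.out.println(before + " vs " + after);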