I have the following code (Android 4):
private HttpURLConnection conn = null;

private synchronized String downloadUrl(String myurl) {
    InputStream is = null;
    BufferedReader _bufferReader = null;
    try {
        URL url_service = new URL(.....);
        System.setProperty("http.keepAlive", "false");
        System.setProperty("http.maxConnections", "5");
        conn = (HttpURLConnection) url_service.openConnection();
        conn.setReadTimeout(DataHandler.TIME_OUT);
        conn.setConnectTimeout(DataHandler.TIME_OUT);
        conn.setRequestMethod("POST");
        conn.setDoInput(true);
        conn.setDoOutput(true);
        conn.setRequestProperty("connection", "close");
        conn.setInstanceFollowRedirects(false);
        conn.connect();
        StringBuilder total = null;
        if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
            is = conn.getInputStream();
            _bufferReader = new BufferedReader(new InputStreamReader(is));
            total = new StringBuilder();
            String line;
            while ((line = _bufferReader.readLine()) != null) {
                total.append(line);
            }
        } else {
            onDomainError();
        }
        return total.toString();
    } catch (SocketTimeoutException ste) {
        onDomainError();
    } catch (Exception e) {
        onDomainError();
    } finally {
        if (is != null) {
            try {
                is.close();
            } catch (IOException e) {
                // TODO Auto-generated catch block
            }
        }
        if (_bufferReader != null) {
            try {
                _bufferReader.close();
            } catch (Exception e) {
                // TODO: handle exception
            }
        }
        if (conn != null)
            conn.disconnect();
        conn = null;
    }
    return null;
}
.disconnect() is used, keep-alive is set to false, and max connections is set to 5. However, if a SocketTimeoutException occurs, the connections are not closed and the device soon runs out of memory. How is this possible?
Also, according to http://developer.android.com/reference/java/net/HttpURLConnection.html, HttpURLConnection should close the connection on disconnect() when keep-alive is false, and reuse it when keep-alive is true. Neither approach works for me. Any ideas what could be wrong?
One possibility is that you are not setting the properties soon enough. According to the javadoc, the "http.keepAlive" property needs to be set to false before issuing any HTTP requests. That might actually mean before the URL protocol drivers are initialized.
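For example, a minimal sketch of setting them at class-initialization time, before the first connection can possibly be opened:

static {
    // Must run before the first HttpURLConnection is created anywhere in the app.
    System.setProperty("http.keepAlive", "false");
    System.setProperty("http.maxConnections", "5");
}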
Another possibility is that your OOME is not caused by this at all. It could be caused by what your app does with the content it has downloaded.
There are some other problems with your code too.
The variable names url_service, _bufferReader and myurl all violate Java's identifier naming conventions.
The conn variable should be a local variable. Making it a field makes the downloadUrl method non-reentrant. (And that might be contributing to your problems ... if multiple threads are sharing one instance of this object!)
You don't need to close both the buffered reader and the input stream. Just close the reader, and it will close the stream. This probably doesn't matter for a reader, but if you do the same with a buffered writer AND you close the output stream first, you are liable to get exceptions.
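Putting these points together, a minimal sketch of the same method with a local connection and a single close might look like this. It keeps the asker's DataHandler.TIME_OUT and onDomainError(); try-with-resources needs Java 7 (Android 4.4+), otherwise use a finally block with one reader.close():

private String downloadUrl(String myUrl) {
    HttpURLConnection conn = null; // local, so the method is reentrant
    try {
        conn = (HttpURLConnection) new URL(myUrl).openConnection();
        conn.setReadTimeout(DataHandler.TIME_OUT);
        conn.setConnectTimeout(DataHandler.TIME_OUT);
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
            onDomainError();
            return null;
        }
        // Closing the reader closes the underlying input stream as well.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder total = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                total.append(line);
            }
            return total.toString();
        }
    } catch (Exception e) {
        onDomainError();
        return null;
    } finally {
        if (conn != null) {
            conn.disconnect();
        }
    }
}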
UPDATE
So we definitely have lots of non-garbage HttpURLConnectionImpl instances, and we probably have multiple threads running this code via AsyncTask.
If you try to connect to a non-responding site (e.g. one where the TCP/IP connect requests are black-holing ...) then the conn.connect() call is going to block for a long time and eventually throw an exception. If the connect timeout is long enough, and your code is doing a potentially unbounded number of these calls in parallel, then you are liable to have lots of these instances.
If this theory is correct, then your problem is nothing to do with keep-alives and connections not being closed. The problem is at the other end ... connections that are never properly established in the first place clogging up memory, and each one tying up a thread / thread stack:
Try reducing the connect timeout.
Try running these requests using an Executor with a bounded thread pool (see the sketch after the quote below).
Note what it says in the AsyncTask javadoc:
"AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most.) If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent pacakge such as Executor, ThreadPoolExecutor and FutureTask."
Related
I have observed that one of my APIs takes much longer when called from Java (URLConnection, Apache HttpClient, or OkHttp) the first time. For subsequent calls, the time is much less.
Postman or curl.exe, by contrast, take very little time (comparable to the second Java iteration).
On my machine the first-call overhead is around 2 seconds, but on some machines it rises to around 5-6 seconds. Thereafter each round trip is around 300 ms.
Here is my sample code:
public static String DoPostUsingURLConnection(String s_uri) throws Exception {
    try {
        URL uri = new URL(s_uri);
        HttpURLConnection connection = (HttpURLConnection) uri.openConnection();
        // Logger.log("Opened Connection");
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);
        connection.setRequestProperty("Authorization", authorizationHeader);
        // Create the request body
        try (OutputStream os = connection.getOutputStream()) {
            byte[] input = jsonRequestBody.getBytes("utf-8");
            os.write(input, 0, input.length);
        }
        // Logger.log("Written Output Stream");
        int responseCode = connection.getResponseCode();
        InputStream is = null;
        if (responseCode == HttpURLConnection.HTTP_OK)
            is = connection.getInputStream();
        else
            is = connection.getErrorStream();
        BufferedReader in = new BufferedReader(new InputStreamReader(is));
        String inputLine;
        StringBuilder response = new StringBuilder();
        while ((inputLine = in.readLine()) != null) {
            response.append(inputLine).append("\n");
        }
        in.close();
        return response.toString();
    } catch (Exception ex) {
        return ex.getMessage();
    } finally {
        // Logger.log("Got full response");
    }
}
You can investigate where the time is spent by logging OkHttp connection events:
https://square.github.io/okhttp/events/
This is particularly relevant if you are getting both an IPv4 and an IPv6 address and one is timing out while the other succeeds.
This is just a guess, but this is how HTTP connections work: when you invoke one for the first time, the connection gets established, and that takes time. After that, the HTTP layer doesn't close the connection for a while, in the expectation that more requests will come and the connection can be re-used. In your case you do send subsequent requests, and they re-use the previously created connection rather than re-establishing it, which is expensive. I have written my own open-source library that has a simplistic HTTP client in it, and I noticed the same effect: the first request takes much longer than subsequent ones. That doesn't explain why Postman and curl don't show the same effect, though. In any case, if you want to solve this problem and you know your URL in advance, send a request upon your app's initialization (you can even do it in a separate thread). That will solve your problem.
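A minimal sketch of that warm-up idea; the URL is a placeholder, and errors are deliberately ignored since the warm-up is best-effort:

new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            // Performs the DNS lookup, TCP connect and TLS handshake up front.
            HttpURLConnection warmup =
                    (HttpURLConnection) new URL("https://example.com/api").openConnection();
            warmup.setRequestMethod("HEAD");
            warmup.getResponseCode(); // forces the request to actually go out
        } catch (IOException ignored) {
            // Best-effort: real requests will simply connect on demand.
        }
    }
}).start();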
If you are interested in the library, here is the Javadoc link. You can find it as a Maven artifact here and on GitHub here. An article covering a partial list of the library's features is here.
What is the reason for so many TCP CLOSE_WAIT states on my server, and how can I resolve this?
This is the sample snippet my Java client uses to connect to the server:
private String postToServer(String serverUrl) { // wrapper added so the snippet compiles
    OutputStream os = null;
    InputStream is = null;
    BufferedReader br = null;
    try {
        HttpURLConnection urlConnection = (HttpURLConnection) (new URL(serverUrl).openConnection());
        urlConnection.setDoOutput(true);
        urlConnection.setDoInput(true);
        urlConnection.setRequestMethod("POST");
        urlConnection.setConnectTimeout(5000);
        urlConnection.setReadTimeout(60000);
        os = urlConnection.getOutputStream();
        // Write to output stream
        os.flush();
        os.close();
        urlConnection.connect(); // no-op here: getOutputStream() has already connected
        is = urlConnection.getInputStream();
        StringBuilder sb = new StringBuilder();
        br = new BufferedReader(new InputStreamReader(is));
        String eachLine = br.readLine();
        while (eachLine != null && "".equals(eachLine) == false) {
            sb.append(eachLine);
            eachLine = br.readLine();
        }
        br.close();
        is.close();
        return sb.toString();
    } catch (SocketTimeoutException se) {
        System.out.println("Socket time out exception");
    } catch (Exception ioException) {
        System.out.println("IO Exception");
    } finally {
        try {
            if (br != null) br.close();
        } catch (IOException ioe) {
            ioe.toString();
        }
        try {
            if (is != null) is.close();
        } catch (IOException ioe) {
            ioe.toString();
        }
        try {
            if (os != null) os.close();
        } catch (IOException ioe) {
            ioe.toString();
        }
    }
    return null; // added: the original snippet omitted the failure return
}
The following article discusses keep-alive time, and I can relate it to how my client code connects to the server.
It says the client should read the error stream completely when an exception occurs, so that the underlying TCP connection can be re-used.
When would I get too many TCP CLOSE_WAIT states, and how can I avoid them?
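For reference, "reading the error stream completely" on the client side usually looks something like this sketch (urlConnection as in the snippet above):

// After an error response or IOException, drain and close the error stream
// so the underlying TCP connection can be returned to the pool and reused.
InputStream err = urlConnection.getErrorStream();
if (err != null) {
    byte[] buffer = new byte[4096];
    while (err.read(buffer) != -1) {
        // discard the bytes; we only read them to free up the connection
    }
    err.close();
}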
What is the reason for so many TCP CLOSE_WAIT states on my server, and how can I resolve this?
Your server is leaking sockets. It is failing to detect client disconnects, or ignoring them, and not closing the socket.
The link you cite is irrelevant.
The cause is that your server code is not actively closing client connections by calling close(), leaving sockets in a state known as "half-closed".
To fix this issue, your server should detect when the connection was closed by the remote host and close the connection appropriately. If you fail to do this connections will stay in the CLOSE_WAIT state until the process itself is terminated and the OS closes all existing connections.
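With a plain blocking-socket server, detecting the remote close just means reacting to end-of-stream. A sketch (clientSocket is illustrative, not from the question):

InputStream in = clientSocket.getInputStream();
int b;
while ((b = in.read()) != -1) {
    // ... handle the request bytes ...
}
// read() returned -1: the client sent FIN. Close our half too, otherwise
// this socket sits in CLOSE_WAIT until the process exits.
clientSocket.close();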
TCP connections actually consist of two half-connections which can be closed independently of each other. One end (like A in diagram below) can call close() on the socket, signaling that it will not send any more data, but the other end (like B in diagram below) may just ACK and continue sending data to A.
(A calls close())

A -----FIN-----> B
FIN_WAIT_1          CLOSE_WAIT

A <----ACK------ B
FIN_WAIT_2

(B can send more data here; this is the half-close state)

(B calls close())

A <----FIN------ B
TIME_WAIT           LAST_ACK

A -----ACK-----> B
   |                CLOSED
2MSL timer
   |
CLOSED
I have a list of 100k users. I have to loop through the list and make an API call to the server for each one to get the result. Each time I create a new URL connection, make the API call, and close the connection once I have read the input stream, but it is taking too much time.
Is there an optimized way to do this, such as reusing the same URL connection instance instead of closing it? Or would a third-party library improve the speed of execution?
I am calling the below method in my loop to get the output.
private String getOutput(String loginName) {
    String responseStatus = null;
    HttpURLConnection connection = null;
    try {
        URL url = new URL("https://api.junk.123.com/output");
        connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("apikey", "authentication key");
        connection.setUseCaches(false);
        connection.setDoOutput(true);
        // Send request
        try (DataOutputStream outputStream = new DataOutputStream(connection.getOutputStream())) {
            JsonObject jsonParam = new JsonObject();
            jsonParam.putString("loginName", loginName);
            outputStream.writeBytes(jsonParam.toString());
            outputStream.flush();
        }
        // Get response
        InputStream inputStream;
        if (connection.getResponseCode() == HttpURLConnection.HTTP_OK) {
            inputStream = connection.getInputStream();
        } else {
            inputStream = connection.getErrorStream();
        }
        if (null == inputStream) {
            return String.valueOf(connection.getResponseCode());
        }
        StringBuilder response = new StringBuilder();
        try (BufferedReader inputBuffer = new BufferedReader(new InputStreamReader(inputStream))) {
            String line;
            while (null != (line = inputBuffer.readLine())) {
                response.append(line);
                response.append("\r");
            }
        }
        JsonObject jsonObject = new JsonObject(response.toString());
        if (connection.getResponseCode() == HttpURLConnection.HTTP_OK) {
            responseStatus = "success";
        } else {
            responseStatus = String.valueOf(connection.getResponseCode()) + jsonObject.getString("errorMessage");
        }
    } catch (MalformedURLException e) {
        logger.error("Malformed URL exception occurred while calling the API for {}", loginName, e);
    } catch (IOException e) {
        logger.error("Creation of connection failed while calling the API for {}", loginName, e);
    } catch (Exception e) {
        logger.error("Error occurred while calling the API for {}", loginName, e);
    } finally {
        if (null != connection) {
            connection.disconnect();
        }
    }
    return responseStatus;
}
This Q&A explains that HTTP persistent connections are implemented behind the scenes by HttpURLConnection:
Persistent HttpURLConnection in Java
However, that may not be sufficient. If you use a single client-side thread to do the fetching you are limited by the round trip time for the requests; i.e. you can't start a second request until the result of the first one has been returned to you. You can remedy this ... up to a point ... by using multiple client-side threads.
However (#2) sending multiple requests in parallel also has its limits. Beyond a certain point you will saturate the client, the server or the network. In addition, some servers have throttling mechanisms to cap the number of requests that a client can make.
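A sketch of the multi-threaded variant, assuming the getOutput() method from the question; the pool size and the loginNames list are illustrative:

// Uses java.util.concurrent: bounded parallelism over the user list.
List<String> fetchAll(List<String> loginNames) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(10); // size is illustrative
    List<Future<String>> futures = new ArrayList<>();
    for (String loginName : loginNames) {
        futures.add(pool.submit(() -> getOutput(loginName)));
    }
    List<String> statuses = new ArrayList<>();
    for (Future<String> future : futures) {
        statuses.add(future.get()); // blocks until that particular call finishes
    }
    pool.shutdown();
    return statuses;
}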
The way to get maximum throughput would be to redesign the API so that a single request can get information for multiple users.
I'm coding a function that connects to a server using the HttpURLConnection class. In the code I establish a connection, then call the getOutputStream() and getInputStream() methods in order. Then I disconnect the connection. After that, I try to use the data obtained from getInputStream(), but I get a NullPointerException at runtime.
Code in below:
DataOutputStream out = null;
InputStreamReader inStrReader = null;
BufferedReader reader = null;
HttpURLConnection connection = null;
try {
    URL postUrl = new URL(null, url, new sun.net.www.protocol.https.Handler());
    connection = (HttpURLConnection) postUrl.openConnection();
    ... // some setting methods
    connection.connect();
    out = new DataOutputStream(connection.getOutputStream());
    out.writeBytes(JSONObject.toJSONString(param));
    out.flush();
    out.close();
    inStrReader = new InputStreamReader(connection.getInputStream(), "utf-8");
    reader = new BufferedReader(inStrReader);
    connection.disconnect(); // <-- HERE, release the connection
    StringBuilder stringBuilder = new StringBuilder();
    for (String line = reader.readLine(); line != null; line = reader.readLine()) { // <-- null pointer
        stringBuilder.append(line);
    }
} catch (Exception e) {
    e.printStackTrace();
    return null;
} finally {
    if (out != null) {
        try {
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    if (inStrReader != null) {
        try {
            inStrReader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    if (reader != null) {
        try {
            reader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
After some debugging, I found that if I move the disconnect line to the end of the finally block, everything works. But I'm confused, because I had already assigned the input stream to 'reader' before disconnecting.
Thanks a lot.
Assigning isn't the same as reading; reader.readLine() is what actually starts reading from the connection.
The InputStreamReader uses the connection to read bytes; you disconnect before it has read them. As the javadoc says:
"An InputStreamReader is a bridge from byte streams to character streams: It reads bytes and ..."
Remember it is a "stream". You need an active connection to read from a stream. Close the connection only after you have retrieved your data from the stream.
You're doing everything in the wrong order. It doesn't make sense.
You're disconnecting and then expecting to be able to read from the connection, which makes no sense. Normally you shouldn't disconnect at all, as it interferes with HTTP connection pooling. Just remove it, or, if you must have it, do it after all the closes (see the sketch after this list).
You're closing in the wrong order, but you don't need to close inStrReader at all: closing the BufferedReader does that. Just remove all the inStrReader.close() code.
You're closing out twice. Don't do that.
connect() happens implicitly. You don't need to call it yourself.
new URL(url) is sufficient. You haven't needed to provide the HTTPS Handler since about 2003.
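Reordered along those lines, the happy path of the method might look like this sketch; disconnect(), if kept at all, comes only after the response has been fully read:

URL postUrl = new URL(url); // the plain constructor is sufficient
HttpURLConnection connection = (HttpURLConnection) postUrl.openConnection();
// ... setting methods ...
try (DataOutputStream out = new DataOutputStream(connection.getOutputStream())) {
    out.writeBytes(JSONObject.toJSONString(param));
} // closes out exactly once
StringBuilder stringBuilder = new StringBuilder();
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(connection.getInputStream(), "utf-8"))) {
    for (String line = reader.readLine(); line != null; line = reader.readLine()) {
        stringBuilder.append(line);
    }
} // closing the BufferedReader also closes the InputStreamReader
connection.disconnect(); // optional; only after everything has been read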
I need a monitor class that regularly checks whether a given HTTP URL is available. I can take care of the "regularly" part using the Spring TaskExecutor abstraction, so that's not the topic here. The question is: What is the preferred way to ping a URL in java?
Here is my current code as a starting point:
try {
    final URLConnection connection = new URL(url).openConnection();
    connection.connect();
    LOG.info("Service " + url + " available, yeah!");
    available = true;
} catch (final MalformedURLException e) {
    throw new IllegalStateException("Bad URL: " + url, e);
} catch (final IOException e) {
    LOG.info("Service " + url + " unavailable, oh no!", e);
    available = false;
}
Is this any good at all (will it do what I want)?
Do I have to somehow close the connection?
I suppose this is a GET request. Is there a way to send HEAD instead?
Is this any good at all (will it do what I want?)
You can do so. Another feasible way is using java.net.Socket.
public static boolean pingHost(String host, int port, int timeout) {
    try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress(host, port), timeout);
        return true;
    } catch (IOException e) {
        return false; // Either timeout or unreachable or failed DNS lookup.
    }
}
There's also the InetAddress#isReachable():
boolean reachable = InetAddress.getByName(hostname).isReachable(timeout);
This however doesn't explicitly test port 80; you risk getting false negatives due to a firewall blocking other ports.
Do I have to somehow close the connection?
No, you don't explicitly need to. The connection is handled and pooled under the hood.
I suppose this is a GET request. Is there a way to send HEAD instead?
You can cast the obtained URLConnection to HttpURLConnection and then use setRequestMethod() to set the request method. However, you need to take into account that some poorly written webapps or homegrown servers may return an HTTP 405 error for a HEAD request (i.e. not available, not implemented, not allowed) while a GET works perfectly fine. Using GET is more reliable if you intend to verify links/resources rather than domains/hosts.
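If you still prefer HEAD, here is a sketch with a GET fallback for such servers; the 405 handling is an assumption about how they respond:

HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
connection.setRequestMethod("HEAD");
int code = connection.getResponseCode();
if (code == HttpURLConnection.HTTP_BAD_METHOD) { // 405: this server rejects HEAD
    connection = (HttpURLConnection) new URL(url).openConnection();
    connection.setRequestMethod("GET"); // retry once with a plain GET
    code = connection.getResponseCode();
}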
Testing the server for availability is not enough in my case, I need to test the URL (the webapp may not be deployed)
Indeed, connecting to a host only tells you whether the host is available, not whether the content is. It can equally well happen that the webserver started without problems but the webapp failed to deploy during the server's start. That will usually not bring the entire server down, though. You can determine this case by checking whether the HTTP response code is 200.
HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
connection.setRequestMethod("HEAD");
int responseCode = connection.getResponseCode();
if (responseCode != 200) {
    // Not OK.
}

// < 100 is undetermined.
// 1nn is informational (shouldn't happen on a GET/HEAD).
// 2nn is success.
// 3nn is redirect.
// 4nn is client error.
// 5nn is server error.
For more detail about response status codes see RFC 2616 section 10. Calling connect() is by the way not needed if you're determining the response data. It will implicitly connect.
For future reference, here's a complete example in flavor of an utility method, also taking account with timeouts:
/**
 * Pings an HTTP URL. This effectively sends a HEAD request and returns
 * <code>true</code> if the response code is in the 200-399 range.
 * @param url The HTTP URL to be pinged.
 * @param timeout The timeout in millis for both the connection timeout and the
 *                response read timeout. Note that the total timeout is effectively
 *                two times the given timeout.
 * @return <code>true</code> if the given HTTP URL has returned response code 200-399
 *         on a HEAD request within the given timeout, otherwise <code>false</code>.
 */
public static boolean pingURL(String url, int timeout) {
    url = url.replaceFirst("^https", "http"); // Otherwise an exception may be thrown on invalid SSL certificates.
    try {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.setConnectTimeout(timeout);
        connection.setReadTimeout(timeout);
        connection.setRequestMethod("HEAD");
        int responseCode = connection.getResponseCode();
        return (200 <= responseCode && responseCode <= 399);
    } catch (IOException exception) {
        return false;
    }
}
Instead of using URLConnection, use HttpURLConnection by calling openConnection() on your URL object.
Then getResponseCode() will give you the HTTP response code once you've read from the connection.
Here is the code:
HttpURLConnection connection = null;
try {
    URL u = new URL("http://www.google.com/");
    connection = (HttpURLConnection) u.openConnection();
    connection.setRequestMethod("HEAD");
    int code = connection.getResponseCode();
    System.out.println("" + code);
    // You can decide based on the HTTP return code received. 200 is success.
} catch (MalformedURLException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (connection != null) {
        connection.disconnect();
    }
}
Also check this similar question: How to check if a URL exists or returns 404 with Java?
Hope this helps.
You could also use HttpURLConnection, which allows you to set the request method (to HEAD for example). Here's an example that shows how to send a request, read the response, and disconnect.
The following code performs a HEAD request to check whether the website is available or not.
public static boolean isReachable(String targetUrl) throws IOException {
    HttpURLConnection httpUrlConnection =
            (HttpURLConnection) new URL(targetUrl).openConnection();
    httpUrlConnection.setRequestMethod("HEAD");
    try {
        int responseCode = httpUrlConnection.getResponseCode();
        return responseCode == HttpURLConnection.HTTP_OK;
    } catch (UnknownHostException noInternetConnection) {
        return false;
    }
}
public boolean isOnline() {
    Runtime runtime = Runtime.getRuntime();
    try {
        Process ipProcess = runtime.exec("/system/bin/ping -c 1 8.8.8.8");
        int exitValue = ipProcess.waitFor();
        return (exitValue == 0);
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
    return false;
}
Possible questions:

Is this really fast enough? Yes, very fast!

Couldn't I just ping my own page, which I want to request anyway? Sure! You could even check both, if you want to differentiate between "internet connection available" and your own servers being reachable.

What if the DNS is down? Google DNS (e.g. 8.8.8.8) is the largest public DNS service in the world. As of 2013 it serves 130 billion requests a day. Let's just say your app not responding would probably not be the talk of the day.
Read the link; it seems very good.
EDIT:
In my experience of using it, it's not as fast as this method:
public boolean isOnline() {
    // Assumes connectivityManager was obtained earlier, e.g. via
    // (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE).
    NetworkInfo netInfo = connectivityManager.getActiveNetworkInfo();
    return netInfo != null && netInfo.isConnectedOrConnecting();
}
They are a bit different in behavior, but for just checking whether there is an internet connection, the first method may be slow due to the connection variables.
Consider using the Restlet framework, which has great semantics for this sort of thing. It's powerful and flexible.
The code could be as simple as:
Client client = new Client(Protocol.HTTP);
Response response = client.get(url);
if (response.getStatus().isError()) {
    // uh oh!
}