ThreadSafeClientConnManager is deprecated, and a new class, PoolingClientConnectionManager, has been introduced.
The documentation of PoolingClientConnectionManager says
Manages a pool of client connections and is able to service connection
requests from multiple execution threads. Connections are pooled on a
per route basis.
My Question
What is the meaning of per route basis here?
Put simply, "per route" means per host you are connecting to.
PoolingHttpClientConnectionManager maintains a maximum limit of connections on a per-route basis and in total. By default this implementation will create no more than 2 concurrent connections per given route and no more than 20 connections in total.
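To make "per route basis" concrete: connections are pooled and capped per target (host and port), with a separate overall cap. The following is a toy sketch in plain JDK Java (not actual HttpClient code) that mirrors that accounting with the default limits of 2 per route and 20 in total:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration only: a pool that limits leases per route (host:port)
// and in total, mirroring PoolingHttpClientConnectionManager's defaults
// of 2 per route and 20 overall.
public class ToyRoutePool {
    private final int maxPerRoute;
    private final int maxTotal;
    private final Map<String, Integer> leasedPerRoute = new HashMap<>();
    private int leasedTotal = 0;

    public ToyRoutePool(int maxPerRoute, int maxTotal) {
        this.maxPerRoute = maxPerRoute;
        this.maxTotal = maxTotal;
    }

    /** Try to lease a connection for the given route, e.g. "example.com:80". */
    public synchronized boolean tryLease(String route) {
        int leased = leasedPerRoute.getOrDefault(route, 0);
        if (leased >= maxPerRoute || leasedTotal >= maxTotal) {
            return false; // the real manager would block/queue here, not refuse
        }
        leasedPerRoute.put(route, leased + 1);
        leasedTotal++;
        return true;
    }

    public static void main(String[] args) {
        ToyRoutePool pool = new ToyRoutePool(2, 20);
        System.out.println(pool.tryLease("example.com:80")); // true
        System.out.println(pool.tryLease("example.com:80")); // true
        System.out.println(pool.tryLease("example.com:80")); // false: per-route cap hit
        System.out.println(pool.tryLease("other.com:443"));  // true: different route
    }
}
```

So two threads hammering the same host share a budget of 2 connections by default, while requests to a different host draw from a separate per-route budget.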
It refers to the HttpRoute. An HttpRoute represents the path to a target host (the target host plus any proxy hops in between).
http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/conn/routing/HttpRoute.html
It is used like below:
ClientConnectionRequest connRequest = connMrg.requestConnection(
        new HttpRoute(new HttpHost("localhost", 80)), null);
ManagedClientConnection conn = connRequest.getConnection(10, TimeUnit.SECONDS);
try {
    BasicHttpRequest request = new BasicHttpRequest("GET", "/");
    conn.sendRequestHeader(request);
    HttpResponse response = conn.receiveResponseHeader();
    conn.receiveResponseEntity(response);
    HttpEntity entity = response.getEntity();
    if (entity != null) {
        BasicManagedEntity managedEntity = new BasicManagedEntity(entity, conn, true);
        // Replace entity
        response.setEntity(managedEntity);
    }
    // Do something useful with the response
    // The connection will be released automatically
    // as soon as the response content has been consumed
} catch (IOException ex) {
    // Abort connection upon an I/O error.
    conn.abortConnection();
    throw ex;
}
source: http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html
Related
I was using
TcpNetClientConnectionFactory cf = new TcpNetClientConnectionFactory(host, port);
for TcpOutboundGateway. Normally, TcpOutboundGateway works in request/reply order, but in my case I extended TcpOutboundGateway to receive arbitrary messages with a MessageChannel. This is why I thought I should use
cf.setLeaveOpen(true)
to keep the connection open.
Although I started using that option, after a long time, when I called the TCP server again, I received an exception like
org.springframework.integration.MessageTimeoutException: Timed out waiting for response
but I did not understand why I was getting this error, because I had set "true" to keep the connection open in my connection factory.
THEN
I did some googling, and it seems I was supposed to use CachingClientConnectionFactory. I understand that it is single-use=true by default and that I am not supposed to change it to false, but then I assume the connection will be opened and closed on each of my request/response transactions. Is that an obstacle to receiving arbitrary data from the server without any request from the client?
OR
How should I keep the connection open between client and server? Should I use
cf.setSoKeepAlive(true)
to keep the connection open?
Are
cf.setSoKeepAlive(true) and cf.setLeaveOpen(true)
the same as each other?
EDIT
Also, when I use cf.setSoKeepAlive(true), after 1 hour I get the same exception.
Full code :
private MessageChannel createNewSubflow(Message<?> message) {
    String host = (String) message.getHeaders().get("host");
    Integer port = (Integer) message.getHeaders().get("port");
    boolean hasThisConnectionIrregularChannel = message.getHeaders().containsKey("irregularMessageChannelName");
    Assert.state(host != null && port != null, "host and/or port header missing");
    String flowRegisterKey;
    if (hasThisConnectionIrregularChannel) {
        flowRegisterKey = host + port + ".extended";
    } else {
        flowRegisterKey = host + port;
    }
    TcpNetClientConnectionFactory cf = new TcpNetClientConnectionFactory(host, port);
    CachingClientConnectionFactory ccf = new CachingClientConnectionFactory(cf, 20);
    ccf.setSoKeepAlive(true);
    ByteArrayCrLfSerializer byteArrayCrLfSerializer = new ByteArrayCrLfSerializer();
    byteArrayCrLfSerializer.setMaxMessageSize(1048576);
    ccf.setSerializer(byteArrayCrLfSerializer);
    ccf.setDeserializer(byteArrayCrLfSerializer);
    TcpOutboundGateway tcpOutboundGateway;
    if (hasThisConnectionIrregularChannel) {
        String unsolicitedMessageChannelName = (String) message.getHeaders().get("irregularMessageChannelName");
        DirectChannel directChannel = getBeanFactory().getBean(unsolicitedMessageChannelName, DirectChannel.class);
        tcpOutboundGateway = new ExtendedTcpOutboundGateway(directChannel);
    } else {
        tcpOutboundGateway = new TcpOutboundGateway();
    }
    tcpOutboundGateway.setRemoteTimeout(20000);
    tcpOutboundGateway.setConnectionFactory(ccf);
    IntegrationFlow flow = f -> f.handle(tcpOutboundGateway);
    IntegrationFlowContext.IntegrationFlowRegistration flowRegistration =
            this.flowContext.registration(flow)
                    .addBean(ccf)
                    .id(flowRegisterKey + ".flow")
                    .register();
    MessageChannel inputChannel = flowRegistration.getInputChannel();
    this.subFlows.put(flowRegisterKey, inputChannel);
    return inputChannel;
}
Why are you using the CachingClientConnectionFactory? It is not needed when you keep the connection open; it is intended to be used when you want to maintain multiple open connections.
Timed out waiting for response
Means the socket was open just fine (from the client's perspective) when you sent the request; we just didn't get a reply. This could mean that some network component (a router) silently closed the socket due to inactivity. Keep-alives should help with that, but it depends on your operating system and how often the TCP stack is configured to send keep-alives.
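On the question of whether cf.setSoKeepAlive(true) and cf.setLeaveOpen(true) are the same: as I understand it, they are not. setLeaveOpen is a Spring Integration flag about whether the factory closes the connection after an exchange, whereas setSoKeepAlive maps down to the TCP-level SO_KEEPALIVE socket option, whose probe interval is an OS setting. A minimal JDK-only sketch of the latter (no Spring involved):

```java
import java.io.IOException;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
            // SO_KEEPALIVE asks the OS TCP stack to send periodic probe
            // packets on an otherwise idle connection. How often it probes
            // is an OS-level setting (often 2 hours by default on Linux:
            // net.ipv4.tcp_keepalive_time), not something the application
            // or Spring Integration controls directly.
            socket.setKeepAlive(true);
            System.out.println("SO_KEEPALIVE=" + socket.getKeepAlive());
        }
    }
}
```

So even with SO_KEEPALIVE set, a middlebox may drop the connection long before the first probe is sent, which matches the timeout seen after an hour of idleness.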
We have been discussing with one of our data providers the issue that some of our HTTP requests are intermittently failing with "Connection Reset" exceptions, but we have also seen "The target server failed to respond" exceptions too.
Many Stack Overflow posts point to some potential solutions, namely
It's a pooling configuration issue, try reaping
HttpClient version issue - suggesting downgrading to HttpClient 4.5.1 (often from 4.5.3) fixes it. I'm using 4.5.12 https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient
The target server is actually failing to process the request (or CloudFront before the origin server).
I'm hoping this question will help me get to the bottom of the root cause.
Context
It's a Java web application hosted in AWS Elastic Beanstalk with 2-4 servers depending on load. The Java WAR file uses HttpClient 4.5.12 to communicate. Over the last few months we have seen
45 x Connection Reset (only 3 were timeouts over 30s, the others failed within 20ms)
To put this into context, we perform in the region of 10,000 requests to this supplier, so the error rate isn't excessive, but it is very inconvenient because our customers pay for the service that then subsequently fails.
Right now we are trying to focus on eliminating the "connection reset" scenarios and we have been recommended to try the following:
1) Restart our app servers (a desperate just-in-case scenario)
2) Change the DNS servers to use Google 8.8.8.8 & 8.8.4.4 (so our request take a different path)
3) Assign a static IP to each server (so they can enable us to communicate without going through their CloudFront distribution)
We will work through those suggestions, but at the same time I want to understand where our HttpClient implementation might not be quite right.
Typical usage
User Request --> Our server (JAX-RS request) --> HttpClient to 3rd party --> Response received e.g. JSON/XML --> Massaged response is sent back (Our JSON format)
Technical details
Tomcat 8 with Java 8 running on 64bit Amazon Linux
4.5.12 HttpClient
4.4.13 HttpCore <-- Maven dependencies shows HttpClient 4.5.12 requires 4.4.13
4.5.12 HttpMime
Typically a HTTP request will take anywhere between 200ms and 10 seconds, with timeouts set around 15-30s depending on the API we are invoking. I also use a connection pool and given that most requests should be complete within 30 seconds I felt it was safe to evict anything older than double that period.
Any advice on whether these are sensible values is appreciated.
// max 200 requests in the connection pool
CONNECTIONS_MAX = 200;
// each 3rd party API can only use up to 50, so worst case 4 APIs can be flooded before exhausted
CONNECTIONS_MAX_PER_ROUTE = 50;
// as our timeouts are typically 30s I'm assuming it's safe to clean up connections
// that are double that
// Connection timeouts are 30s, wasn't sure whether to close 31s or wait 2xtypical = 60s
CONNECTION_CLOSE_IDLE_MS = 60000;
// If the connection hasn't been used for 60s then we aren't busy and we can remove from the connection pool
CONNECTION_EVICT_IDLE_MS = 60000;
// Is this per request or per packet? Either way, all requests should finish within 30s
CONNECTION_TIME_TO_LIVE_MS = 60000;
// To ensure connections are validated if in the pool but hasn't been used for at least 500ms
CONNECTION_VALIDATE_AFTER_INACTIVITY_MS = 500; // WAS 30000 (not tested 500ms yet)
Additionally we tend to set the three timeouts to 30s, but I'm sure we can fine-tune these...
// client tries to connect to the server. This denotes the time elapsed before the connection established or Server responded to connection request.
// The time to establish a connection with the remote host
.setConnectTimeout(...) // typical 30s - I guess this could be 5s (if we can't connect by then the remote server is stuffed/busy)
// Used when requesting a connection from the connection manager (pooling)
// The time to fetch a connection from the connection pool
.setConnectionRequestTimeout(...) // typical 30s - I guess only applicable if our pool is saturated, then this means how long to wait to get a connection?
// After establishing the connection, the client socket waits for response after sending the request.
// This is the time of inactivity to wait for packets to arrive
.setSocketTimeout(...) // typical 30s - I believe this is the main one that we care about, if we don't get our payload in 30s then give up
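As a sanity check on what setSocketTimeout actually bounds (inactivity while waiting for bytes, not total request time), here is a JDK-only illustration using a local server that accepts a connection but never writes anything, so the client's read() trips the timeout:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SocketTimeoutDemo {
    public static void main(String[] args) throws IOException {
        // A local server that accepts connections but never sends any data.
        try (ServerSocket server = new ServerSocket(0)) {
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                // SO_TIMEOUT: how long a read() may sit with no incoming
                // bytes before giving up. This is the socket-level setting
                // that HttpClient's setSocketTimeout configures.
                client.setSoTimeout(200);
                try {
                    client.getInputStream().read();
                    System.out.println("got data");
                } catch (SocketTimeoutException e) {
                    System.out.println("read timed out");
                }
            }
        }
    }
}
```

A slow but steadily dripping response can therefore run far longer than the socket timeout without ever triggering it, which matters when interpreting "failed to respond" errors.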
I have copied and pasted the main code we use for all GET/POST requests, but stripped out the unimportant aspects such as our retry logic, pre-cache and post-cache.
We are using a single PoolingHttpClientConnectionManager with a single CloseableHttpClient, they're both configured as follows...
private static PoolingHttpClientConnectionManager createConnectionManager() {
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    cm.setMaxTotal(CONNECTIONS_MAX); // 200
    cm.setDefaultMaxPerRoute(CONNECTIONS_MAX_PER_ROUTE); // 50
    cm.setValidateAfterInactivity(CONNECTION_VALIDATE_AFTER_INACTIVITY_MS); // was 30000, now 500
    return cm;
}
private static CloseableHttpClient createHttpClient() {
    httpClient = HttpClientBuilder.create()
            .setConnectionManager(cm)
            .disableAutomaticRetries() // our code does the retries
            .evictIdleConnections(CONNECTION_EVICT_IDLE_MS, TimeUnit.MILLISECONDS) // 60000
            .setConnectionTimeToLive(CONNECTION_TIME_TO_LIVE_MS, TimeUnit.MILLISECONDS) // 60000
            .setRedirectStrategy(LaxRedirectStrategy.INSTANCE)
            // .setKeepAliveStrategy() - the default implementation looks solely at the 'Keep-Alive' header's timeout token.
            .build();
    return httpClient;
}
Every minute I have a thread that tries to reap connections
public static PoolStats performIdleConnectionReaper(Object source) {
    synchronized (source) {
        final PoolStats totalStats = cm.getTotalStats();
        Log.info(source, "max:" + totalStats.getMax() + " avail:" + totalStats.getAvailable() + " leased:" + totalStats.getLeased() + " pending:" + totalStats.getPending());
        cm.closeExpiredConnections();
        cm.closeIdleConnections(CONNECTION_CLOSE_IDLE_MS, TimeUnit.MILLISECONDS); // 60000
        return totalStats;
    }
}
This is the custom method that performs all HttpClient GET/POST requests. It does stats, pre-cache, post-cache and other useful stuff, but I've stripped all of that out; this is the typical outline performed for each request. I've tried to follow the pattern in the HttpClient docs that tells you to consume the entity and close the response. Note that I don't close the httpClient, because one instance is being used for all requests.
public static HttpHelperResponse execute(HttpHelperParams params) {
    boolean abortRetries = false;
    while (!abortRetries && ret.getAttempts() <= params.getMaxRetries()) {
        // 1 Create HttpClient
        // This is done once in the static init: CloseableHttpClient httpClient = createHttpClient(params);
        // 2 Create one of the methods, e.g. HttpGet / HttpPost - note this also adds HTTP headers
        // (see separate method below)
        HttpRequestBase request = createRequest(params);
        // 3 Tell HTTP Client to execute the command
        CloseableHttpResponse response = null;
        HttpEntity entity = null;
        boolean alreadyStreamed = false;
        try {
            response = httpClient.execute(request);
            if (response == null) {
                throw new Exception("Null response received");
            } else {
                final StatusLine statusLine = response.getStatusLine();
                ret.setStatusCode(statusLine.getStatusCode());
                ret.setReasonPhrase(statusLine.getReasonPhrase());
                if (ret.getStatusCode() == 429) {
                    try {
                        final int delay = (int) (Math.random() * params.getRetryDelayMs());
                        Thread.sleep(500 + delay); // minimum 500ms + random amount up to delay specified
                    } catch (Exception e) {
                        Log.error(false, params.getSource(), "HttpHelper Rate-limit sleep exception", e, params);
                    }
                } else {
                    // 4 Read the response
                    // 6 Deal with the response
                    // do something useful with the response body
                    entity = response.getEntity();
                    if (entity == null) {
                        throw new Exception("Null entity received");
                    } else {
                        ret.setRawResponseAsString(EntityUtils.toString(entity, params.getEncoding()));
                        ret.setSuccess();
                        if (response.getAllHeaders() != null) {
                            for (Header header : response.getAllHeaders()) {
                                ret.addResponseHeader(header.getName(), header.getValue());
                            }
                        }
                    }
                }
            }
        } catch (Exception ex) {
            if (ret.getAttempts() >= params.getMaxRetries()) {
                Log.error(false, params.getSource(), ex);
            } else {
                Log.warn(params.getSource(), ex.getMessage());
            }
            ret.setError(ex); // If we subsequently get a response then the error will be cleared.
        } finally {
            ret.incrementAttempts();
            // Any HTTP 2xx is considered successful, so stop retrying; also stop if
            // a specific HTTP code has been passed in to stop retrying
            if (ret.getStatusCode() >= 200 && ret.getStatusCode() <= 299) {
                abortRetries = true;
            } else if (params.getDoNotRetryStatusCodes().contains(ret.getStatusCode())) {
                abortRetries = true;
            }
            if (entity != null) {
                try {
                    // and ensure it is fully consumed - hand it back to the pool
                    EntityUtils.consume(entity);
                } catch (IOException ex) {
                    Log.error(false, params.getSource(), "HttpHelper Was unable to consume entity", params);
                }
            }
            if (response != null) {
                try {
                    // The underlying HTTP connection is still held by the response object
                    // to allow the response content to be streamed directly from the network socket.
                    // In order to ensure correct deallocation of system resources
                    // the user MUST call CloseableHttpResponse#close() from a finally clause.
                    // Please note that if response content is not fully consumed the underlying
                    // connection cannot be safely re-used and will be shut down and discarded
                    // by the connection manager.
                    response.close();
                } catch (IOException ex) {
                    Log.error(false, params.getSource(), "HttpHelper Was unable to close a response", params);
                }
            }
            // When using connection pooling we don't want to close the client, otherwise the connection
            // pool will also be closed
            // if (httpClient != null) {
            //     try {
            //         httpClient.close();
            //     } catch (IOException ex) {
            //         Log.error(false, params.getSource(), "HttpHelper Was unable to close httpClient", params);
            //     }
            // }
        }
    }
    return ret;
}
private static HttpRequestBase createRequest(HttpHelperParams params) {
    ...
    request.setConfig(RequestConfig.copy(RequestConfig.DEFAULT)
            // client tries to connect to the server. This denotes the time elapsed before the connection is established or the server responds to the connection request.
            // The time to establish a connection with the remote host
            .setConnectTimeout(...) // typical 30s
            // Used when requesting a connection from the connection manager (pooling)
            // The time to fetch a connection from the connection pool
            .setConnectionRequestTimeout(...) // typical 30s
            // After establishing the connection, the client socket waits for the response after sending the request.
            // This is the time of inactivity to wait for packets to arrive
            .setSocketTimeout(...) // typical 30s
            .build()
    );
    return request;
}
I have recently used HttpClient 4.3. I know the API has changed, but if I do not set any timeout threshold (connection, socket, or connection manager), it still works: there is no infinite blocking, and method.getResponseBodyAsString() returns an empty string. But the documentation says the default timeout setting is infinite, so how does it work?
public class ContentModelUtils {
    private static HttpClient client = new HttpClient();
    ...
    public static String fetchPlainHttpResult(String id, Map<String, String> result, String getUrl)
            throws HttpException, IOException {
        method = new GetMethod(fetchPlainUrl(id, result, getUrl));
        // client.getParams().setParameter("http.socket.timeout", 1000);
        // client.getParams().setParameter("http.connection.timeout", 1000);
        // client.getParams().setParameter("http.connection-manager.timeout", 10000L);
        client.executeMethod(method);
        if (method.getStatusCode() != 200) {
            return null;
        }
        String outputValue = new String(method.getResponseBodyAsString());
        return outputValue;
    }
    ...
The default setting is in fact an infinite timeout. To prove this, let's browse the source repository for Apache HttpCore 4.3.x.
In BasicConnFactory, we can see it pulling the connect timeout setting, and the line of code that retrieves the timeout parameter uses a default of 0.
this.connectTimeout = params.getIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, 0);
Later, in BasicConnFactory#create, this timeout value is passed into a socket connection.
socket.connect(new InetSocketAddress(hostname, port), this.connectTimeout);
According to the documentation of Socket#connect, a timeout value of 0 (which we saw earlier is the default) is interpreted as an infinite timeout.
Connects this socket to the server with a specified timeout value. A timeout of zero is interpreted as an infinite timeout. The connection will then block until established or an error occurs.
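This default is also easy to observe from the JDK side: a freshly created java.net.Socket reports an SO_TIMEOUT of 0, which per the documentation means "infinite":

```java
import java.io.IOException;
import java.net.Socket;

public class DefaultTimeoutDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
            // 0 means "infinite" for both SO_TIMEOUT and the connect
            // timeout, per the java.net.Socket documentation.
            System.out.println("default SO_TIMEOUT=" + socket.getSoTimeout());
        }
    }
}
```

In practice the call returns quickly anyway because the remote server does respond; the infinite default only bites when the server hangs or the network silently drops packets.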
I'm trying to port a network speed test from JavaScript to Java running under Android. The way the speed test works is by hitting a CGI, requesting a given amount of data, and timing how long the data takes to transfer. The amount requested is changed dynamically to provide a relatively constant update rate.
But when I try to do this under Android, I see that the amount of time it takes for the response to come doesn't seem to be proportional to the amount of data requested. I am doing something like this:
final HttpParams params = new BasicHttpParams();
HttpClient httpclient = new DefaultHttpClient(ccm, params);
URI url;
try {
    url = new URI("https://myserver.com/randomfile.php?pages=250");
} catch (Exception e) {
    return -1;
}
HttpParams p = httpclient.getParams();
int timeout = 5000;
p.setIntParameter(CoreConnectionPNames.SO_TIMEOUT, timeout);
p.setIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, timeout);
HttpGet request = new HttpGet();
request.setURI(url);
try {
    long t0, t1, dt;
    int rc;
    t0 = System.currentTimeMillis();
    HttpResponse response = httpclient.execute(request);
    t1 = System.currentTimeMillis();
    dt = t1 - t0;
    rc = response.getStatusLine().getStatusCode();
    Log.d(logtag, "Get response code=" + rc + ",t=" + dt);
} catch (Exception e) {
    return -1;
}
From this, I'm guessing that the call to httpclient.execute is returning as soon as it sees the response headers, but before all the data has been transferred. I'm looking for the minimum amount of work required to know when the data has been completely received. I don't care what the data is (I'm happy to just throw it away; it's just random bytes), and I don't want to waste extra time processing it if possible, to avoid skewing the reported transfer rate.
What's the minimum I need to do to accomplish this?
Also, it seems like there is some extra overhead in just setting up the call. For instance, whether I request 4096 bytes or 1MB, I see about a 500ms delay either way. I'm not sure where this extra delay is coming from; is there some way to get rid of it? It is going to skew the results a lot more than a few milliseconds spent pulling data out of buffers.
It is possible to simply skip over the returned content using the skip() method of the InputStream obtained by calling getContent():
InputStream instr;
t0 = java.lang.System.currentTimeMillis();
HttpResponse response = httpclient.execute(request);
rc=response.getStatusLine().getStatusCode();
instr=response.getEntity().getContent();
instr.skip(250*4096);
t1 = java.lang.System.currentTimeMillis();
As for the call overhead: with a larger transfer it is a smaller part of the overall time.
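One caveat with the snippet above: InputStream.skip() is permitted to skip fewer bytes than requested, so a single skip(250*4096) call may return before all the content has actually arrived. A minimal-work way to be sure the transfer completed is to loop until end-of-stream, discarding the bytes. The sketch below uses a ByteArrayInputStream as a stand-in for response.getEntity().getContent():

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainDemo {
    /** Reads and discards the stream until EOF; returns total bytes seen. */
    public static long drain(InputStream in) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            total += n; // bytes are thrown away; we only count them
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for response.getEntity().getContent() in the real code.
        InputStream fake = new ByteArrayInputStream(new byte[1024 * 1024]);
        System.out.println(drain(fake)); // 1048576
    }
}
```

Taking the end timestamp after drain() returns (rather than after execute()) measures the full body transfer, which is what the speed test needs.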
I need a monitor class that regularly checks whether a given HTTP URL is available. I can take care of the "regularly" part using the Spring TaskExecutor abstraction, so that's not the topic here. The question is: What is the preferred way to ping a URL in java?
Here is my current code as a starting point:
try {
final URLConnection connection = new URL(url).openConnection();
connection.connect();
LOG.info("Service " + url + " available, yeah!");
available = true;
} catch (final MalformedURLException e) {
throw new IllegalStateException("Bad URL: " + url, e);
} catch (final IOException e) {
LOG.info("Service " + url + " unavailable, oh no!", e);
available = false;
}
Is this any good at all (will it do what I want)?
Do I have to somehow close the connection?
I suppose this is a GET request. Is there a way to send HEAD instead?
Is this any good at all (will it do what I want?)
Yes, that will work. Another feasible way is to use java.net.Socket.
public static boolean pingHost(String host, int port, int timeout) {
    try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress(host, port), timeout);
        return true;
    } catch (IOException e) {
        return false; // Either timeout or unreachable or failed DNS lookup.
    }
}
There's also InetAddress#isReachable():
boolean reachable = InetAddress.getByName(hostname).isReachable();
This however doesn't explicitly test port 80. You risk getting false negatives due to a firewall blocking other ports.
Do I have to somehow close the connection?
No, you don't explicitly need to. It's handled and pooled under the hood.
I suppose this is a GET request. Is there a way to send HEAD instead?
You can cast the obtained URLConnection to HttpURLConnection and then use setRequestMethod() to set the request method. However, you need to take into account that some poor webapps or homegrown servers may return an HTTP 405 error for a HEAD (i.e. not available, not implemented, not allowed) while a GET works perfectly fine. Using GET is more reliable if you intend to verify links/resources rather than domains/hosts.
Testing the server for availability is not enough in my case, I need to test the URL (the webapp may not be deployed)
Indeed, connecting to a host only tells you whether the host is available, not whether the content is available. It can just as well happen that the webserver started without problems, but the webapp failed to deploy during the server's start. This will usually not cause the entire server to go down, though. You can determine that by checking whether the HTTP response code is 200.
HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
connection.setRequestMethod("HEAD");
int responseCode = connection.getResponseCode();
if (responseCode != 200) {
    // Not OK.
}
// < 100 is undetermined.
// 1nn is informal (shouldn't happen on a GET/HEAD)
// 2nn is success
// 3nn is redirect
// 4nn is client error
// 5nn is server error
For more detail about response status codes, see RFC 2616 section 10. By the way, calling connect() is not needed if you're reading the response data; it will connect implicitly.
For future reference, here's a complete example in flavor of an utility method, also taking account with timeouts:
/**
 * Pings a HTTP URL. This effectively sends a HEAD request and returns <code>true</code> if the response code is in
 * the 200-399 range.
 * @param url The HTTP URL to be pinged.
 * @param timeout The timeout in millis for both the connection timeout and the response read timeout. Note that
 * the total timeout is effectively two times the given timeout.
 * @return <code>true</code> if the given HTTP URL has returned response code 200-399 on a HEAD request within the
 * given timeout, otherwise <code>false</code>.
 */
public static boolean pingURL(String url, int timeout) {
    url = url.replaceFirst("^https", "http"); // Otherwise an exception may be thrown on invalid SSL certificates.
    try {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.setConnectTimeout(timeout);
        connection.setReadTimeout(timeout);
        connection.setRequestMethod("HEAD");
        int responseCode = connection.getResponseCode();
        return (200 <= responseCode && responseCode <= 399);
    } catch (IOException exception) {
        return false;
    }
}
Instead of using URLConnection use HttpURLConnection by calling openConnection() on your URL object.
Then getResponseCode() will give you the HTTP response code once you've read from the connection.
here is code:
HttpURLConnection connection = null;
try {
    URL u = new URL("http://www.google.com/");
    connection = (HttpURLConnection) u.openConnection();
    connection.setRequestMethod("HEAD");
    int code = connection.getResponseCode();
    System.out.println("" + code);
    // You can decide based on the HTTP return code received. 200 is success.
} catch (MalformedURLException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (connection != null) {
        connection.disconnect();
    }
}
Also check similar question How to check if a URL exists or returns 404 with Java?
Hope this helps.
You could also use HttpURLConnection, which allows you to set the request method (to HEAD for example). Here's an example that shows how to send a request, read the response, and disconnect.
The following code performs a HEAD request to check whether the website is available or not.
public static boolean isReachable(String targetUrl) throws IOException {
    HttpURLConnection httpUrlConnection = (HttpURLConnection) new URL(targetUrl).openConnection();
    httpUrlConnection.setRequestMethod("HEAD");
    try {
        int responseCode = httpUrlConnection.getResponseCode();
        return responseCode == HttpURLConnection.HTTP_OK;
    } catch (UnknownHostException noInternetConnection) {
        return false;
    }
}
public boolean isOnline() {
    Runtime runtime = Runtime.getRuntime();
    try {
        Process ipProcess = runtime.exec("/system/bin/ping -c 1 8.8.8.8");
        int exitValue = ipProcess.waitFor();
        return (exitValue == 0);
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
    return false;
}
Possible Questions
Is this really fast enough? Yes, very fast!
Couldn't I just ping my own page, which I want to request anyway? Sure! You could even check both, if you want to differentiate between "internet connection available" and your own servers being reachable.
What if the DNS is down? Google DNS (e.g. 8.8.8.8) is the largest public DNS service in the world. As of 2013 it serves 130 billion requests a day. Let's just say your app not responding would probably not be the talk of the day.
Read the link; it seems very good.
EDIT:
In my experience of using it, it's not as fast as this method:
public boolean isOnline() {
    NetworkInfo netInfo = connectivityManager.getActiveNetworkInfo();
    return netInfo != null && netInfo.isConnectedOrConnecting();
}
They are a bit different, but for the purpose of just checking the internet connection, the first method may be slow due to the connection setup.
Consider using the Restlet framework, which has great semantics for this sort of thing. It's powerful and flexible.
The code could be as simple as:
Client client = new Client(Protocol.HTTP);
Response response = client.get(url);
if (response.getStatus().isError()) {
    // uh oh!
}