Proper way to test if server is up in Java?

What would be the proper way to simply see if a connection to a website/server can be made? I want this for an application I am coding that will just alert me if my website goes offline.
Thanks!

You can use an HttpURLConnection to send a request and check the response body for text that is unique to that page (rather than just checking to see if there's a response at all, just in case an error or maintenance page or something is being served).
Apache Commons has a library that removes a lot of the boilerplate of making HTTP requests in Java.
I've never done anything like this specifically on Android, but I'd be surprised if it's any different.
Here's a quick example:
URL url = new URL(URL_TO_APPLICATION);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
InputStream stream = connection.getInputStream();
// You can read the stream however you want; Scanner is just an easy example.
Scanner scanner = new Scanner(stream);
boolean found = false;
while (scanner.hasNext()) {
    String next = scanner.next();
    if (TOKEN.equals(next)) {
        found = true;
        break;
    }
}
if (found) {
    doSomethingAwesome();
} else {
    throw aFit();
}

You also want to set the connection and read timeouts using setConnectTimeout(int timeout) and setReadTimeout(int timeout); otherwise the code might hang for a long time waiting for a non-responsive server to reply.
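A minimal sketch of how that fits the example above (URL_TO_APPLICATION is the same placeholder as before; the 5-second values are arbitrary):

URL url = new URL(URL_TO_APPLICATION);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setConnectTimeout(5000); // give up if the TCP connect takes longer than 5 s
connection.setReadTimeout(5000);    // give up if the server stops sending data mid-response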

Related

UrlConnection API Call takes much more time the first time, then onwards it is comparable to curl.exe or postman

I have observed that one of my APIs takes much more time when called from Java (URLConnection, Apache HttpClient, or OkHttp) the first time; subsequent calls are much faster.
Postman or curl.exe, by contrast, is fast even on the first call (comparable to the second Java iteration).
On my machine the first-call overhead is around 2 seconds, but on some machines it rises to around 5-6 seconds. Thereafter the round trip is around 300 ms.
Here is my sample code:
public static String DoPostUsingURLConnection(String s_uri) throws Exception {
    try {
        URL uri = new URL(s_uri);
        HttpURLConnection connection = (HttpURLConnection) uri.openConnection();
        // Logger.log("Opened Connection");
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);
        connection.setRequestProperty("Authorization", authorizationHeader);
        // Create the request body
        try (OutputStream os = connection.getOutputStream()) {
            byte[] input = jsonRequestBody.getBytes("utf-8");
            os.write(input, 0, input.length);
        }
        // Logger.log("Written Output Stream");
        int responseCode = connection.getResponseCode();
        InputStream is;
        if (responseCode == HttpURLConnection.HTTP_OK) {
            is = connection.getInputStream();
        } else {
            is = connection.getErrorStream();
        }
        BufferedReader in = new BufferedReader(new InputStreamReader(is));
        String inputLine;
        StringBuffer response = new StringBuffer();
        while ((inputLine = in.readLine()) != null) {
            response.append(inputLine).append("\n");
        }
        in.close();
        return response.toString();
    } catch (Exception ex) {
        return ex.getMessage();
    } finally {
        // Logger.log("Got full response");
    }
}
You can investigate where the time is being spent by logging OkHttp connection events:
https://square.github.io/okhttp/events/
This is particularly relevant if you are getting both an IPv4 and an IPv6 address and one is timing out while the other succeeds.
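A minimal sketch of such a listener, assuming OkHttp 3.x or later is on the classpath (the class name and log format are mine; the EventListener callbacks are OkHttp's public API):

import java.net.InetSocketAddress;
import java.net.Proxy;
import okhttp3.Call;
import okhttp3.EventListener;

// Logs how far into the call each connection phase begins, exposing whether
// the first-request overhead is DNS, the TCP connect, or the TLS handshake.
class TimingEventListener extends EventListener {
    private long callStartNanos;

    private void log(String phase) {
        System.out.printf("%6d ms  %s%n",
                (System.nanoTime() - callStartNanos) / 1_000_000, phase);
    }

    @Override public void callStart(Call call) {
        callStartNanos = System.nanoTime();
        log("callStart");
    }
    @Override public void dnsStart(Call call, String domainName) { log("dnsStart"); }
    @Override public void connectStart(Call call, InetSocketAddress address, Proxy proxy) { log("connectStart"); }
    @Override public void secureConnectStart(Call call) { log("secureConnectStart"); }
    @Override public void callEnd(Call call) { log("callEnd"); }
}

Register it with new OkHttpClient.Builder().eventListenerFactory(call -> new TimingEventListener()).build() so each call gets a fresh listener with its own start time.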
This is just a guess, but it matches the way HTTP connections work: when you invoke a request for the first time, the connection has to be established, and that takes time. After that, HTTP doesn't immediately close the connection; it keeps it alive for a while in the expectation that more requests will come and the connection can be re-used. In your case you do send subsequent requests, and they re-use the previously created connection rather than re-establishing it, which is expensive. I have written my own open-source library with a simplistic HTTP client in it, and I noticed the same effect: the first request takes much longer than subsequent ones. That doesn't explain why Postman and curl don't show the same effect, though. Anyway, if you want to solve this and you know your URL in advance, send a request upon your app's initialization (you can even do it on a separate thread); see the sketch below. That will solve your problem.
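A minimal sketch of that warm-up, placed in your app's initialization code (API_URL is a hypothetical constant for the known endpoint; the timeouts are arbitrary):

// Fire one best-effort request on a background thread at startup so the
// DNS/TCP/TLS setup cost is paid before the first real call.
new Thread(() -> {
    try {
        HttpURLConnection conn = (HttpURLConnection) new URL(API_URL).openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        conn.getResponseCode(); // forces connect + handshake
        conn.disconnect();
    } catch (IOException ignored) {
        // best effort: if this fails, the first real call just pays the setup cost
    }
}).start();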

HttpURLConnection FileNotFoundException on large request properties

I'm using HttpURLConnection to send JSON data from an Android Application to my Tomcat Server.
The POST works fine with small JSON payloads; on bigger data sets it fails with a FileNotFoundException.
What can it be?
Here's the code:
try {
    URL url = new URL(urlIn);
    strOut = "";
    huc = (HttpURLConnection) url.openConnection();
    huc.setRequestProperty("Connection", "Close");
    huc.setRequestMethod("POST");
    huc.setRequestProperty("User", userId);
    huc.setRequestProperty("Action", action);
    huc.setRequestProperty("JSON", jsonData);
    huc.setConnectTimeout(10000);
    in = new BufferedReader(new InputStreamReader(huc.getInputStream()));
    while ((inputLine = in.readLine()) != null) {
        if (strOut.equalsIgnoreCase("")) {
            strOut = inputLine;
        } else {
            strOut = strOut + inputLine;
        }
    }
} catch (Exception e) {
    strOut = "";
    e.printStackTrace();
}
When jsonData gets to a certain size (around 10,000 characters), the POST fails with the error mentioned. The content of the JSON does not contain any special characters.
Thanks in advance.
Best regards, Federico.
HttpURLConnection throws a FileNotFoundException if the server responds with a 404 status code, so the cause seems to lie on the server side rather than the client side. Most likely the server is configured to accept request headers only up to a particular length and returns an error when that size is exceeded. A short web search brings up a couple of results; sizes of 16 KB are mentioned, but shorter limits are also plausible.
As I mentioned in my comment to your question, you should change your process to receive the JSON data (and the values for User and Action as well, BTW) as part of the request body, e.g. as a url-encoded query string or as multipart form data. Both ways are supported by the HTTP client libraries you can use, or are easily built manually; see the sketch below.
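A minimal sketch of the url-encoded-body variant, reusing the identifiers from the question (urlIn, userId, action, jsonData); the parameter names user/action/json are hypothetical and must match whatever the servlet reads:

public static int postAsBody(String urlIn, String userId, String action,
                             String jsonData) throws Exception {
    HttpURLConnection huc = (HttpURLConnection) new URL(urlIn).openConnection();
    huc.setRequestMethod("POST");
    huc.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    huc.setDoOutput(true);
    // URL-encode each field so arbitrary JSON text survives the transport;
    // request bodies are not subject to the server's header-size limit.
    String body = "user=" + URLEncoder.encode(userId, "UTF-8")
            + "&action=" + URLEncoder.encode(action, "UTF-8")
            + "&json=" + URLEncoder.encode(jsonData, "UTF-8");
    try (OutputStream os = huc.getOutputStream()) {
        os.write(body.getBytes("UTF-8"));
    }
    return huc.getResponseCode();
}

On the servlet side the values then come from request.getParameter("user") and friends instead of request.getHeader(...).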
After lots of reading and trying, I gave up on configuring Tomcat to accept larger headers.
So I convinced the team in charge of the Tomcat app to provide a servlet that receives this data in the body, just as Lothar suggested.
Thanks!

Incrementally handling twitter's streaming api using apache httpclient?

I am using Apache HttpClient 4 to connect to Twitter's streaming API with default-level access. It works perfectly well in the beginning, but after a few minutes of retrieving data it bails out with this error:
2012-03-28 16:17:00,040 DEBUG org.apache.http.impl.conn.SingleClientConnManager: Get connection for route HttpRoute[{tls}->http://myproxy:80->https://stream.twitter.com:443]
2012-03-28 16:17:00,040 WARN com.cloudera.flume.core.connector.DirectDriver: Exception in source: TestTwitterSource
java.lang.IllegalStateException: Invalid use of SingleClientConnManager: connection still allocated. Make sure to release the connection before allocating another one.
    at org.apache.http.impl.conn.SingleClientConnManager.getConnection(SingleClientConnManager.java:216)
    at org.apache.http.impl.conn.SingleClientConnManager$1.getConnection(SingleClientConnManager.java:190)
I understand why I am facing this issue. I am trying to use this HttpClient in a flume cluster as a flume source. The code looks like this:
public Event next() throws IOException, InterruptedException {
    try {
        HttpHost target = new HttpHost("stream.twitter.com", 443, "https");
        new BasicHttpContext();
        HttpPost httpPost = new HttpPost("/1/statuses/filter.json");
        StringEntity postEntity = new StringEntity("track=birthday", "UTF-8");
        postEntity.setContentType("application/x-www-form-urlencoded");
        httpPost.setEntity(postEntity);
        HttpResponse response = httpClient.execute(target, httpPost,
                new BasicHttpContext());
        BufferedReader reader = new BufferedReader(new InputStreamReader(
                response.getEntity().getContent()));
        String line = null;
        StringBuffer buffer = new StringBuffer();
        while ((line = reader.readLine()) != null) {
            buffer.append(line);
            if (buffer.length() > 30000) break;
        }
        return new EventImpl(buffer.toString().getBytes());
    } catch (IOException ie) {
        throw ie;
    }
}
I am trying to buffer 30,000 characters of the response stream into a StringBuffer and then return that as the data received. I am obviously not closing the connection, but I do not want to close it just yet, I guess. Twitter's dev guide talks about this here. It reads:
Some HTTP client libraries only return the response body after the connection has been closed by the server. These clients will not work for accessing the Streaming API. You must use an HTTP client that will return response data incrementally. Most robust HTTP client libraries will provide this functionality. The Apache HttpClient will handle this use case, for example.
It clearly tells you that HttpClient will return response data incrementally. I've gone through the examples and tutorials, but I haven't found anything that comes close to doing this. If you have used an HTTP client (Apache or otherwise) to read Twitter's streaming API incrementally, please let me know how you achieved that feat. Those who haven't, please feel free to contribute answers. TIA.
UPDATE
I tried the following: 1) I moved obtaining the stream handle to the open method of the flume source. 2) I used a plain InputStream and read the data into a byte buffer. Here is what the method body looks like now:
byte[] buffer = new byte[30000];
while (true) {
    int count = instream.read(buffer);
    if (count == -1)
        continue;
    else
        break;
}
return new EventImpl(buffer);
This works to an extent: I get tweets and they are nicely written to the destination. The problem is the instream.read(buffer) return value. Even when there is no data on the stream, the buffer keeps its default \u0000 bytes, all 30,000 of them, and those get written to the destination, so the destination file looks like "tweets..tweets..tweets..\u0000\u0000\u0000\u0000...tweets..tweets...". I understand read won't return -1 because this is a never-ending stream, so how do I figure out how much new content the read call actually put in the buffer?
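A minimal sketch of one fix, assuming the event should contain only what the read actually produced (Arrays is java.util.Arrays):

byte[] buffer = new byte[30000];
int count = instream.read(buffer);
if (count > 0) {
    // read returns how many bytes it wrote into the buffer, so trim to that;
    // the \u0000 padding beyond it never reaches the destination
    return new EventImpl(Arrays.copyOf(buffer, count));
}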
The problem is that your code is leaking connections. Make sure that, no matter what, you either close the content stream or abort the request:
InputStream instream = response.getEntity().getContent();
try {
    BufferedReader reader = new BufferedReader(new InputStreamReader(instream));
    String line = null;
    StringBuffer buffer = new StringBuffer();
    while ((line = reader.readLine()) != null) {
        buffer.append(line);
        if (buffer.length() > 30000) {
            // aborting means the connection will not be re-used
            httpPost.abort();
            break;
        }
    }
    return new EventImpl(buffer.toString().getBytes());
} finally {
    // if the request was not aborted, closing the stream lets the connection be re-used
    try {
        instream.close();
    } catch (IOException ex) {
        // log or ignore
    }
}
It turns out that it was a Flume issue. Flume is optimized to transfer events of up to 32 KB; anything beyond that and Flume bails out. (The workaround is to tune the event size to be greater than 32 KB.) So I changed my code to buffer at least 20,000 characters. It kind of works, but it is not foolproof: it can still fail if the buffered length exceeds 32 KB. However, it hasn't failed so far in an hour of testing; I believe that is because Twitter doesn't send a lot of data on its public stream.
while ((line = reader.readLine()) != null) {
    buffer.append(line);
    if (buffer.length() > 20000) break;
}

Servlet that sends back JSON: Confusion on reception

I have a Servlet that sends back a JSON Object and I would like to use this servlet in another Java project. I have this method that gets me the results:
public JSONArray getSQL(String aServletURL) {
    JSONArray toReturn = null;
    String returnString = "";
    try {
        URL myUrl = new URL(aServletURL);
        URLConnection conn = myUrl.openConnection();
        conn.setDoOutput(true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        String s;
        while ((s = in.readLine()) != null)
            returnString += s;
        in.close();
        toReturn = new JSONArray(returnString);
    } catch (Exception e) {
        return new JSONArray();
    }
    return toReturn;
}
This works pretty well, but the problem I am facing is the following:
When I make several simultaneous requests, the results get mixed up and I sometimes get a response that does not match the request I sent.
I suspect the problem is related to the way I read the response: the Reader reading a String from the connection's InputStream.
How can I make sure that I get one request -> one corresponding reply?
Is there a better way to retrieve my JSON object from my servlet ?
Cheers,
Tim
When I do several simultaneous requests, the results get mixed up and I sometimes get a Response that does not match the request I send.
Your servlet is not thread-safe. I'd bet that you've improperly assigned request-scoped data, directly or indirectly, as instance or class variables of the servlet. This is a common beginner's mistake.
Carefully read How do servlets work? Instantiation, sessions, shared variables and multithreading and fix your servlet code accordingly. The problem is not in the URLConnection code shown so far, although it indicates that you're doing exactly the same job in both doGet() and doPost(), which in turn is already a smell as to how the servlet is designed.
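For illustration, a minimal sketch of the anti-pattern being described (the class and field names are hypothetical):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BadServlet extends HttpServlet {
    private String json; // WRONG: the single servlet instance serves all requests concurrently

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        json = buildJsonFor(req);     // request A stores its result here...
        resp.getWriter().write(json); // ...but request B may have overwritten it by now
    }

    private String buildJsonFor(HttpServletRequest req) {
        return "[]"; // placeholder
    }
}

The fix is to keep such per-request data in local variables inside doGet()/doPost().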
Try removing setDoOutput(true); you are using the connection only for input, so you shouldn't need it.
Edit: alternatively, try using HttpClient; it's much nicer than using "raw" Java.

HttpClient response handler always returns closed stream

I'm new to Java development so please bear with me. Also, I hope I'm not the champion of tl;dr :).
I'm using HttpClient to make requests over HTTP (duh!) and I'd gotten it to work for a simple servlet that receives a URL as a query string parameter. I realized that my code could use some refactoring, so I decided to make my own HttpResponseHandler to clean up the code, make it reusable, and improve exception handling.
I currently have something like this:
public class HttpResponseHandler implements ResponseHandler<InputStream> {
    public InputStream handleResponse(HttpResponse response)
            throws ClientProtocolException, IOException {
        int statusCode = response.getStatusLine().getStatusCode();
        InputStream in = null;
        if (statusCode != HttpStatus.SC_OK) {
            throw new HttpResponseException(statusCode, null);
        } else {
            HttpEntity entity = response.getEntity();
            if (entity != null) {
                in = entity.getContent();
                // This works:
                // for (int i; (i = in.read()) >= 0;) System.out.print((char) i);
            }
        }
        return in;
    }
}
And in the method where I make the actual request:
HttpClient httpclient = new DefaultHttpClient();
HttpGet httpget = new HttpGet(target);
ResponseHandler<InputStream> httpResponseHandler = new HttpResponseHandler();
try {
    InputStream in = httpclient.execute(httpget, httpResponseHandler);
    // This doesn't work:
    // for (int i; (i = in.read()) >= 0;) System.out.print((char) i);
    return in;
} catch (HttpResponseException e) {
    throw new HttpResponseException(e.getStatusCode(), null);
}
The problem is that the input stream returned from the handler is closed. I have no idea why, but I've verified it with the prints in my code (and no, I haven't used them both at the same time :). While the first print works, the second one gives a closed-stream error.
I need InputStreams, because all my other methods expect an InputStream and not a String. Also, I want to be able to retrieve images (or maybe other types of files), not just text files.
I can work around this pretty easily by giving up on the response handler (I have a working implementation that doesn't use it), but I'm pretty curious about the following:
Why does it do what it does?
How do I open the stream, if something closes it?
What's the right way to do this, anyway :)?
I've checked the docs and I couldn't find anything useful regarding this issue. To save you a bit of Googling, here's the Javadoc and here's the HttpClient tutorial (Section 1.1.8 - Response handlers).
Thanks,
Alex
It closes the stream because a ResponseHandler must handle the whole response. Even if you got an open stream back, it would already be positioned at the end of the stream.
The stream is closed by BasicHttpEntity's consumeContent() call to ensure you don't read from the stream again.
In your case, you don't really need a ResponseHandler.
The automatic resource management closes the stream for you to make sure all resources are freed and ready for the next task.
If you want streams, your best bet is to copy the content to a byte array and return a ByteArrayInputStream, provided the content is relatively modest.
If the content is not modest, you'll have to do the resource management yourself and not use the ResponseHandler.
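A minimal sketch of that byte-array variant, adapting the handler from the question (EntityUtils.toByteArray() is Apache HttpClient's own helper in org.apache.http.util; everything else reuses the question's names):

public class HttpResponseHandler implements ResponseHandler<InputStream> {
    public InputStream handleResponse(HttpResponse response)
            throws ClientProtocolException, IOException {
        int statusCode = response.getStatusLine().getStatusCode();
        if (statusCode != HttpStatus.SC_OK) {
            throw new HttpResponseException(statusCode, null);
        }
        HttpEntity entity = response.getEntity();
        if (entity == null) {
            return null;
        }
        // Drain the entity into memory before the client releases the
        // connection; the returned stream is then independent of it.
        return new ByteArrayInputStream(EntityUtils.toByteArray(entity));
    }
}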
