Android okHttp Retry policy - java

I am trying to create a simple wrapper which will call the server, download the information, and parse the binary data that is sent back.
For the connection I am using the okhttp library. Since the connection on 3G is not very reliable, I have decided to implement a very simple retry mechanism using the following function **(note: this method will always be called from a background thread)**:
private InputStream callServer() throws ServerException, NoNetworkAvailableException, ConnectionErrorException {
    NetworkOperation networkOperation = getNetworkOperation();
    InputStream inputStream = null;
    // in case of network problems we will retry 3 times, separated by 5 seconds, before giving up
    while (connectionFailedRetryCounter < connectionFailedMaximumAllowedRetries()) {
        connectionFailedRetryCounter++;
        try {
            inputStream = networkOperation.execute();
            break; // if this line was reached it means a successful operation, no need to retry
        } catch (ConnectionErrorException e) {
            if (canRetryToConnect()) {
                Utils.forceSleepThread(Constants.Communications.ConnectionFailedTrialCounter.SLEEP_BETWEEN_REQUESTS_MILLI); // retry after 5 secs (Thread.sleep)
            } else {
                throw e; // I give up
            }
        }
    }
    return inputStream;
}

private boolean canRetryToConnect() {
    return (connectionFailedRetryCounter < connectionFailedMaximumAllowedRetries()) && !canceled;
}
Is this the right way to do this? Or is it already handled by the library itself, so there is no need to implement anything like this?
Here is what the execute() method does:
public InputStream execute() throws ConnectionErrorException, NoNetworkAvailableException, ServerException {
    if (!Utils.isNetworkAvailable(context)) {
        throw new NoNetworkAvailableException();
    }
    Response response = doExecute();
    if (!response.isSuccessful()) {
        throw new ServerException(response.code());
    }
    return response.body().byteStream();
}

private Response doExecute() throws ConnectionErrorException {
    Response response;
    try {
        if (getRequestType() == RequestType.GET) {
            response = executeGet();
        } else {
            response = executePost();
        }
    } catch (IOException e) {
        throw new ConnectionErrorException();
    }
    return response;
}

You can avoid retrying if you catch NoNetworkAvailableException. Don't retry if you know the following attempts will fail anyway.
I would make connectionFailedMaximumAllowedRetries() a constant. I doubt you will need to change that value at any point.
Implement exponential back-off. You could have it retry 10 times, multiplying the delay by 2 each time (with a cap of a few minutes). For example:
Try call - failed
Wait 1 second
Try call - failed
Wait 2 seconds
Try call - failed
Wait 4 seconds
...
Try call - succeeded
This is very typical behaviour. In the event of a short outage, the call will be made again very quickly and succeed. In the event of a longer outage, you don't want to be calling constantly every few seconds. This gives your code the best chance of having its call go through. Obviously, care should be taken not to annoy the user if this call is required for a UI change.
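A minimal sketch of that back-off loop, reusing execute() and the canceled flag from the question (MAX_RETRIES, BASE_DELAY_MILLIS and MAX_DELAY_MILLIS are illustrative constants, not part of the original code):
private static final int MAX_RETRIES = 10;
private static final long BASE_DELAY_MILLIS = 1000;          // start at 1 second
private static final long MAX_DELAY_MILLIS = 5 * 60 * 1000;  // cap at 5 minutes

private InputStream callServerWithBackoff() throws ServerException, NoNetworkAvailableException, ConnectionErrorException {
    long delay = BASE_DELAY_MILLIS;
    ConnectionErrorException lastError = null;
    for (int attempt = 0; attempt < MAX_RETRIES && !canceled; attempt++) {
        try {
            return getNetworkOperation().execute(); // success, no retry needed
        } catch (ConnectionErrorException e) {
            lastError = e;
            try {
                Thread.sleep(delay);                // wait before the next attempt
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt(); // stop retrying if asked to cancel
                break;
            }
            delay = Math.min(delay * 2, MAX_DELAY_MILLIS); // double the delay, capped
        }
    }
    throw lastError != null ? lastError : new ConnectionErrorException();
}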

Related

How to wait for another application (eXist-db) to start (using java), before interacting with it?

I am currently working with eXist-db. What I want to accomplish is to execute a command line script that starts eXist-db (/bin/startup.sh), then wait for it to create the database so I can get a collection from it.
// start database
try {
    Runtime.getRuntime().exec(path + start);
} catch (IOException ex) {
    return false;
}
// get collection
col = DatabaseManager.getCollection(URI + "/db", username, password);
I want to hold off on the getCollection call until the database has been created (can be called), or, if the database doesn't initialise within a certain amount of time, I would like to kill it (let's say one minute at most). What is the best solution for this problem? Sleeping/waiting several times and trying to call the database, something like this?
Process pr = null;
try {
    pr = Runtime.getRuntime().exec(path + start);
} catch (IOException ex) {
    return false;
}
for (int i = 0; i < 60; i++) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException ex) {
        pr.destroy();
        return false;
    }
    try {
        dbDetected = initCollection();
    } catch (XMLDBException ex) {
        if (ex.errorCode != ErrorCodes.VENDOR_ERROR ||
                "Failed to read server's response: Connection refused (Connection refused))"
                        .compareTo(ex.getMessage()) != 0) {
            pr.destroy();
            return false;
        }
    }
}
As for the killing part, I would like to confirm the assumption that storing the process and killing it with Process.destroy() should be enough (based on the assumption that the database startup script is taking too long; in a normal run, at the end of my application I would use the provided eXist-db script /bin/shutdown.sh).
Rather than using startup.sh, if you are running in embedded mode, you can use the ExistEmbeddedServer class (or it might be called EmbeddedExistServer; sorry, I am away from my computer for a few days) from the test package instead.
I don't think you can use startup.sh directly for your purpose, as it creates a foreground process. Instead you should start eXist from your Java application as described above.
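If you do end up launching eXist as an external process anyway, the wait-with-deadline and kill behaviour asked about in the question could be sketched as below. This only restructures the question's own pieces (initCollection(), the Process pr, the one-minute limit) and is not eXist-specific API:
// Poll for the collection for up to one minute, then give up and kill the process.
long deadline = System.currentTimeMillis() + 60_000;
boolean dbDetected = false;
while (!dbDetected && System.currentTimeMillis() < deadline) {
    try {
        dbDetected = initCollection();
    } catch (XMLDBException ex) {
        // database not reachable yet; fall through and retry after a short pause
    }
    if (!dbDetected) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}
if (!dbDetected) {
    pr.destroy(); // startup took too long, kill the external process
    return false;
}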

Can't detect disconnect without extra readLine() loop

I am developing a program that uses sockets and currently I have a function in my code that checks for a heartbeat from the client every second.
private void userLoop() { // checks for incoming data from client
    Timer t = new Timer();
    t.schedule(new TimerTask() {
        @Override
        public void run() {
            try {
                socketIn.read(); // check for heartbeat from client
                String userInput;
                while ((userInput = br.readLine()) != null) {
                }
            } catch (Exception e) {
                ControlPanel.model.removeElement(getUsername());
                ControlPanel.append(getUsername() + " has disconnected.");
            }
        }
    }, 1000);
}
When a client closes the game via the X button, shuts off their computer, logs out, or whatever it may be, I get the message "'username' has disconnected". This is exactly what I want; however, it only works with the while loop in the code. The while loop essentially does nothing, and I have no idea why it doesn't work without it.
If I remove the while loop and disconnect using my client, nothing gets printed out server side.
String userInput;
while ((userInput = br.readLine()) != null) {
}
The above is essentially dead code that does nothing, but without it my program doesn't work the way it should.
Why is that code needed, and how can I remove it and still make my program work correctly?
In this case, your while loop is essentially stalling your program until you no longer receive an input string. It's not dead code; it is just your way of installing a wait.
Otherwise, based on my understanding of the Timer class, it only waits one second, which might be too short a timespan for what you're waiting to capture.
I fixed my problem by replacing everything in the try block with
br.readLine();
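For context, that change makes the timer task body look roughly like this (a sketch only; the fields and the surrounding schedule() call are from the question):
t.schedule(new TimerTask() {
    @Override
    public void run() {
        try {
            // blocks until a line arrives, the stream ends (returns null),
            // or an I/O error is thrown
            br.readLine();
        } catch (Exception e) {
            ControlPanel.model.removeElement(getUsername());
            ControlPanel.append(getUsername() + " has disconnected.");
        }
    }
}, 1000);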
There's a saying I've heard about exception handling: "Exceptions should only be used for exceptional situations." A client disconnecting from a server is not exceptional.
Now that I have that off my chest, let's move on. According to this other question,
socket.getInputSteam.read() does not throw when I close the socket from the client
it sounds like the read call won't throw if you're closing things properly on the client side.
The problem is that when the remote socket is closed, read() does not throw an Exception, it just returns -1 to signal the end of the stream.
The following should work without needing to call readLine():
try {
    int ret = socketIn.read(); // check for heartbeat from client
    if (ret == -1) {
        // Remote side closed gracefully
        clientDisconnected();
    }
} catch (SocketTimeoutException e) {
    // Timeout -- handle as required
    handleTimeout();
} catch (IOException e) {
    // Connection lost due to I/O error
    clientDisconnected();
}
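Note that the SocketTimeoutException branch only ever fires if a read timeout has been set on the underlying socket beforehand (socket here stands for the Socket that socketIn was obtained from, which the question doesn't show):
socket.setSoTimeout(2000); // read() now gives up after 2 seconds instead of blocking forever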

How to set repetition when timeout exception occurs

I've used JAXWS-RI 2.1 to create an interface for my web service, based on a WSDL. I can interact with the web service with no problems, but I haven't been able to specify a retry (repetition) when a SocketTimeoutException occurs:
try {
    final Response response = service.serviceName(params);
} catch (SocketTimeoutException e) {
}
Is there a way to specify this on the service, or do I need to code it myself?
For example, I would set 3 repetitions, and if there is still a timeout after 3 exceptions, the exception should be thrown.
There isn't a native way to do this (I suspect you're coming from Ruby, where this is a language feature). You will need to loop, then break on success, e.g.:
Response response = null;
for (int i = 0; i < 3; i++) {
    try {
        response = service.serviceName(params);
        break; // success, stop retrying
    } catch (SocketTimeoutException e) {
        try {
            Thread.sleep(10 * 1000); // wait 10 seconds before the next attempt
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}
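If you also want the timeout to propagate once the retries are exhausted, as the question describes, a hedged variation looks like this (the attempt count and delay are just examples, and the enclosing method has to declare throws SocketTimeoutException):
Response response = null;
SocketTimeoutException lastTimeout = null;
for (int i = 0; i < 3; i++) {
    try {
        response = service.serviceName(params);
        lastTimeout = null;
        break;                           // success, stop retrying
    } catch (SocketTimeoutException e) {
        lastTimeout = e;                 // remember the most recent failure
        try {
            Thread.sleep(10 * 1000);     // back off before the next attempt
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}
if (lastTimeout != null) {
    throw lastTimeout;                   // still timing out after all attempts
}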

Thread safety with ExecutorService and CountDownLatch

I have a loop where I download images; I need to load, for example, 10 images and merge them into one image. What I care about is whether all the images have been loaded. This is how I do it:
I have an executor to limit the thread count, and I have a CountDownLatch barrier which waits until all images have been loaded.
CountDownLatch barrier = new CountDownLatch(images.size());
private static ExecutorService executorService = Executors.newFixedThreadPool(MAX_THREAD_POOL);
for (Image image : images) {
    executorService.execute(new ImageRunnable(image, barrier));
}
barrier.await();
In ImageRunnable I download the image like this, from a Google static map:
String url = "my url";
try {
    URL target = new URL(url);
    ImageIO.read(target);
    barrier.countDown();
    // exaggerated logic
} catch (IOException e) {
    System.out.println("Can not load image, " + e);
}
Other people have told me that I can get into a situation where all the threads in the executor are busy and my algorithm never ends, because it will wait until all threads reach the barrier.await() point (deadlock). As I was told, this can happen when ImageIO.read(target) is called and the connection is established but the HTTP session is never closed (the response from the server never comes back). Can this happen? I thought that in this case I would get some exception and the bad thread would be interrupted. That is exactly what happens when I run my loop and close the internet connection with a firewall on the third image: on the output I get a broken image, as if the network was closed and the image was not loaded to the end. Am I wrong?
The concern is you may throw an exception and never count down your latch.
I would consider doing this:
String url = "my url";
try {
    URL target = new URL(url);
    ImageIO.read(target);
} catch (IOException e) {
    System.out.println("Can not load image, " + e);
    throw e;
} finally {
    barrier.countDown();
}
Throw the exception to let the world know you've run into a problem and may not be able to complete (you know you can't recover from it) but at the very least let the barrier get lowered. I'd rather have to deal with an exception than a deadlock.
Just to flesh out my comment:
CompletionService<Image> service = new ExecutorCompletionService<Image>(
        Executors.newFixedThreadPool(nThreads));
for (Image image : images) {
    service.submit(new ImageRunnable(image), image);
}
try {
    for (int i = 0; i < images.size(); i++) {
        service.take();
    }
} catch (InterruptedException e) {
    // someone wants this thread to cancel peacefully; either exit the thread
    // or at a bare minimum do this to pass the interruption up
    Thread.currentThread().interrupt();
}
There. That's it.
If you're concerned about enforcing timeouts on the HTTP connection, my quick and dirty research suggests something like...
URL target = // whatever;
URLConnection connection = target.openConnection();
connection.setReadTimeout(timeoutInMilliseconds);
InputStream stream = null;
try {
    stream = connection.getInputStream();
    return ImageIO.read(stream);
} finally {
    if (stream != null) { stream.close(); }
}
Apart from moving barrier.countDown() to the finally block as suggested by @corsiKa, make sure your code always finishes. Set some timeout on reading the URL and on await():
barrier.await(1, TimeUnit.MINUTES);
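Putting those pieces together, a hedged sketch of what ImageRunnable's run() could look like, with both the read timeout and the latch released in finally (buildUrl() and what happens to the loaded image are hypothetical placeholders; only the latch and the image field come from the question):
@Override
public void run() {
    try {
        URLConnection connection = new URL(buildUrl(image)).openConnection();
        connection.setReadTimeout(15 * 1000);         // don't hang forever on a stalled response
        try (InputStream stream = connection.getInputStream()) {
            BufferedImage loaded = ImageIO.read(stream);
            // ... hand 'loaded' over to whatever merges the images ...
        }
    } catch (IOException e) {
        System.out.println("Can not load image, " + e);
    } finally {
        barrier.countDown();                          // always release the latch, success or failure
    }
}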

Java InputStream Locking

I am using an InputStream to stream a file over the network.
However, if my network goes down during the process of reading the file, the read method blocks and never recovers when the network reappears.
I was wondering how I should handle this case, and whether some exception shouldn't be thrown if the InputStream goes away.
The code is like this:
URL someUrl = new URL("http://somefile.com");
InputStream inputStream = someUrl.openStream();
int size = 1024;
byte[] byteArray = new byte[size];
inputStream.read(byteArray, 0, size);
So somewhere after calling read, the network goes down and the read method blocks.
How can I deal with this situation, since the read doesn't seem to throw an exception?
From looking at the documentation here:
http://docs.oracle.com/javase/6/docs/api/java/io/InputStream.html
It looks like read does throw an exception.
There are a few options to solve your specific problem.
One option is to track the progress of the download, and keep that status elsewhere in your program. Then, if the download fails, you can restart it and resume at the point of failure.
However, I would instead restart the download if it fails. You will need to restart it anyway so you might as well redo the whole thing from the beginning if there is a failure.
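If you did want the resume-at-point-of-failure option, one hedged sketch uses the HTTP Range request header (this assumes the server supports range requests; the method and variable names here are illustrative, not from the question):
// Reopen the stream starting at the byte offset we had already read.
private InputStream resumeDownload(String fileUrl, long bytesAlreadyRead) throws IOException {
    URLConnection connection = new URL(fileUrl).openConnection();
    connection.setRequestProperty("Range", "bytes=" + bytesAlreadyRead + "-");
    connection.setReadTimeout(15 * 1000); // so a dead network surfaces as an exception
    return connection.getInputStream();
}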
The short answer is to use Selectors from the NIO package. They allow non-blocking network operations.
If you intend to use old-style sockets, you may try some code samples from here.
Have a separate Thread running that has a reference to your InputStream, and have something reset its timer after the last data has been received, or something similar. If that flag has not been reset after N seconds, then have the Thread close the InputStream. The read(...) will throw an IOException and you can recover from it then.
What you need is similar to a watchdog. Something like this:
public class WatchDogThread extends Thread
{
    private final Runnable timeoutAction;
    private final AtomicLong lastPoke = new AtomicLong( System.currentTimeMillis() );
    private final long maxWaitTime;

    public WatchDogThread( Runnable timeoutAction, long maxWaitTime )
    {
        this.timeoutAction = timeoutAction;
        this.maxWaitTime = maxWaitTime;
    }

    public void poke()
    {
        lastPoke.set( System.currentTimeMillis() );
    }

    public void run()
    {
        while( !Thread.interrupted() ) {
            if( lastPoke.get() + maxWaitTime < System.currentTimeMillis() ) {
                timeoutAction.run();
                break;
            }
            try {
                Thread.sleep( 1000 );
            } catch( InterruptedException e ) {
                break;
            }
        }
    }
}

public class Example
{
    public void method() throws IOException
    {
        final InputStream is = null;
        WatchDogThread watchDog =
            new WatchDogThread(
                new Runnable()
                {
                    @Override
                    public void run()
                    {
                        try {
                            is.close();
                        } catch( IOException e ) {
                            System.err.println( "Failed to close: " + e.getMessage() );
                        }
                    }
                },
                10000
            );
        watchDog.start();
        try {
            is.read();
            watchDog.poke();
        } finally {
            watchDog.interrupt();
        }
    }
}
EDIT:
As noted, sockets already have a timeout; that would be preferred over a watchdog thread.
The inputStream.read() function is a blocking call and should be made in a separate thread.
There is an alternative way of avoiding this situation: the InputStream also has a method available(). It returns the number of bytes that can be read from the stream without blocking.
Call the read method only if there are some bytes available in the stream:
byte[] recv = new byte[1024];
int length = 0;
int ret = in.available();
if (ret != 0) {
    length = in.read(recv);
}
InputStream does throw IOException. I hope this information is useful to you.
This isn't a big deal. All you need to do is set a timeout on your connection.
URL url = ...;
URLConnection conn = url.openConnection();
conn.setConnectTimeout(30000);
conn.setReadTimeout(15000);
InputStream is = conn.getInputStream();
Eventually, one of the following things will happen: your network will come back and your transfers will resume; the TCP stack will eventually time out, in which case an exception IS thrown; or the socket will get a socket closed/reset error and you'll get an IOException. In all cases the thread will let go of the read() call, and your thread will return to the pool, ready to service other requests without you having to do anything extra.
For example, if your network goes out, you won't be getting any new connections coming in, so the fact that this thread is tied up isn't going to make any difference. Your network going out isn't the problem.
A more likely scenario is that the server you are talking to gets jammed up and stops sending you data, which slows down your clients as well. This is where tuning your timeouts matters more than writing more code, using NIO, separate threads, etc. Separate threads will just increase your machine's load and, in the end, force you to abandon the thread after a timeout, which is exactly what TCP already gives you. You could also tear your server up because you are creating a new thread for every request, and if you start abandoning threads you could easily wind up with hundreds of threads all sitting around waiting for a timeout on their socket.
If you have a high volume of traffic on your server going through this method, any hold-up in response time from a dependency, like an external server, is going to affect your response time. So you will have to figure out how long you are willing to wait before you just error out and tell the client to try again, because the server you're reading this file from isn't giving it up fast enough.
Other ideas are caching the file locally, limiting your network trips, etc., to reduce your exposure to an unresponsive peer. The exact same thing can happen with databases on external servers: if your DB doesn't send you responses fast enough, it can jam up your thread pool just like a file that doesn't come down quickly enough. So why worry any differently about file servers? More error handling isn't going to fix your problem, and it will just make your code obtuse.
