Java program and network connection timeout

I have a Java program that gathers data from the web.
Unfortunately, my network connection is unreliable and goes on and off.
I need a way to make my program wait until the connection is back and then continue its job.
I use URLConnection to connect. I wrote a loop that reconnects when a ConnectException is caught, but it doesn't work.
Any suggestions?

my network connection is unreliable and goes on and off. I need a way to make
my program wait until the connection is back and then continue its job
It depends on what your program's purpose is.
How long are these intermittent failures? If they are short enough, you could call setConnectTimeout(0) to indicate an infinite timeout while connecting; but if your program has to report something back, that is not a good option for the end user.
Alternatively, you could set a relatively low timeout so that when you start to lose the network you get a java.net.SocketTimeoutException.
When you catch it, you could wait for a period and try again in a loop, e.g. three times, and then report a failure; see the sketch below.
It depends on what you are trying to do.
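A minimal sketch of such a retry loop; the URL, timeout values, and retry count are placeholders to tune for your own network:

import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.net.URLConnection;

public class RetryingFetcher {

    // Placeholder values: tune these to your own network conditions.
    private static final int MAX_ATTEMPTS = 3;
    private static final int TIMEOUT_MILLIS = 5_000;
    private static final long WAIT_BETWEEN_ATTEMPTS_MILLIS = 10_000;

    static InputStream openWithRetry(String url) throws IOException, InterruptedException {
        IOException lastFailure = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                URLConnection conn = new URL(url).openConnection();
                conn.setConnectTimeout(TIMEOUT_MILLIS); // fail fast instead of hanging
                conn.setReadTimeout(TIMEOUT_MILLIS);
                return conn.getInputStream();           // connects on first use
            } catch (SocketTimeoutException | java.net.ConnectException e) {
                lastFailure = e;                        // network is flapping; wait and retry
                Thread.sleep(WAIT_BETWEEN_ATTEMPTS_MILLIS);
            }
        }
        throw lastFailure;                              // give up and report the failure
    }
}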

Thread.sleep() taking longer than expected?

We have a Java client/server RMI application that uses a Persistence Framework. Upon starting a client session we start the following thread:
Thread checkInThread = new Thread() {
    public void run() {
        while (true) {
            try {
                getServer().checkIn(userId);
            }
            catch (Exception ex) {
                JOptionPane.showMessageDialog(parentFrame, "The connection to the server was lost.");
                ex.printStackTrace();
            }
            try {
                Thread.sleep(15000);
            }
            catch (InterruptedException e) {
            }
        }
    }
};
This is used to keep track of whether a client session loses its connection to the server. If the client does not check in for 45 seconds, there are a number of things we need to clean up from that client's session. Upon their next check-in after they've gone beyond the 45-second threshold, we boot them from the system, which then allows them to log back in. In theory the only time this should happen is if the client PC loses connectivity to the server.
However, we have come across scenarios where the thread runs just fine, checking in every 15 seconds, and then for an unknown reason the thread will just go out to lunch for 45+ seconds. Eventually the client checks back in, but it seems like something is blocking the execution of the thread during that time. We have experienced this using both Swing and JavaFX on the client side. The client/server are only compatible with Windows OS.
Is there an easy way to figure out what is causing this to happen, or a better approach to make sure the check-ins occur regularly at 15-second intervals, assuming there is connectivity between client and server?
getServer().checkIn(userId);
The getServer() or checkIn() calls may themselves take more than 15 seconds to return, which by itself would explain why
the thread will just go out to lunch for 45+ seconds.
This can happen when the client machine goes into sleep or hibernate mode, usually when it's a laptop that just had its lid closed.
There can also be temporary network outages that last more than 15 seconds but allow connections to resume automatically when the network comes back. In that case, the client can be stuck in checkIn(), not in sleep().
You should absolutely and positively not do this. There is no such thing as a connection in RMI, so you are testing for a condition that does not exist. You are also interfering with RMI's connection pooling. The correct way to accomplish what you're attempting is via the remote session pattern and the Unreferenced interface: RMI can already tell you when a client loses connectivity, without all this overhead. 'Still connected' has no meaning in RMI.
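A minimal sketch of that session pattern; the Session interface, userId field, and cleanup logic are hypothetical, and the timing of the unreferenced() callback is governed by the JVM's default DGC lease, not by this code:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
import java.rmi.server.Unreferenced;

// Hypothetical per-client session interface.
interface Session extends Remote {
    void doWork() throws RemoteException;
}

// One instance is exported per logged-in client. When the client JVM
// dies or loses connectivity, the DGC lease on its stub eventually
// expires and RMI calls unreferenced() for us.
class SessionImpl extends UnicastRemoteObject implements Session, Unreferenced {
    private final String userId;

    SessionImpl(String userId) throws RemoteException {
        this.userId = userId;
    }

    public void doWork() throws RemoteException {
        // ... real per-client work ...
    }

    @Override
    public void unreferenced() {
        // Client went away: clean up its session state here.
        System.out.println("Cleaning up session for " + userId);
    }
}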

URLConnection, why two different timeouts? (connect and read)

Just curiosity: is there a good reason why the class URLConnection needs to have two different timeouts?
The connectTimeout is the maximum time in milliseconds to wait while connecting. Connecting to a server will fail with a SocketTimeoutException if the timeout elapses before a connection is established.
The readTimeout is the maximum time to wait for an input stream read to complete before giving up. Reading will fail with a SocketTimeoutException if the timeout elapses before data becomes available.
Can you give me a good reason why these two values should be different? Why would a call need more time to establish the connection than to receive some data (or vice versa)?
I am asking because I have to configure these values, and my idea is to set the same value for both.
Let's say the server is busy, is configured to accept N connections, and all of them are long-running; all of a sudden you send in a request. What should happen? Should you wait indefinitely, or should you time out? That's connectTimeout.
Now let's say your server goes brain-dead: it accepts the connection but does nothing with it (or, say, it synchronously queries a database, the query takes a long time, and the server ends up in a deadlock), while the client keeps waiting for the response. What should the client do? Should it wait indefinitely, or should it time out? That's readTimeout.
The connection timeout is how long you're prepared to wait to get some sort of response from the server. It's not particularly related to what it is that you're trying to achieve.
But suppose you had a service that would allow you to give it a large number, and have it return its prime factors. The server might take quite a while to generate the answer and send it to you.
You might well have clear expectations that the server would quickly respond to the connection: maybe even a delay of 5 seconds here tells you that the server is likely to be down. But the read timeout might need to be much higher: it might be a few minutes before you get to be able to read the server's answer to your query.
The connect time-out is the time-out within which you want a connection (in normal situations, TCP) to be established. The default time-outs specified in the internet RFCs and implemented by the various OSes are normally in the minute(s) range. But we know that if a server is available and reachable, it will respond in a matter of milliseconds, and otherwise not at all. A couple of seconds at most is a normal value.
The read time-out is the time within which the server is expected to respond after it has received the request. Read time-outs therefore depend on the time within which you expect the server to deliver the result. This varies with the type of request you are making, and should be larger if the processing takes time or the server may be very busy in some situations. Especially if you retry after a read time-out, it is best not to set read time-outs too low; a factor of 3-4 times the expected response time is normal. A configuration sketch follows below.
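As an illustration, a minimal sketch of configuring the two timeouts independently on a URLConnection; the URL and the particular values are placeholders:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutConfigExample {
    public static void main(String[] args) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://example.com/slow-report").openConnection();

        // A reachable server accepts connections within milliseconds,
        // so a short connect timeout detects a dead server quickly.
        conn.setConnectTimeout(3_000);

        // Producing the response may legitimately take much longer,
        // so the read timeout is far more generous.
        conn.setReadTimeout(120_000);

        try (BufferedReader in =
                     new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine());
        }
    }
}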

Gathering strings from a website that starts refusing your connection after a certain amount of time

I am currently creating a program that, based on constantly changing variables, connects to a website and gathers information. It must connect to the website up to 400 times. The website starts displaying a blank screen after a certain number of connections, roughly 10-30. Does anyone know the best way to find how long to wait between connections?
public static String pullString(int id) {
    return null;
}
I can't get there from work, but Google "RuneScape API". They have one, and I'll bet they expect you to use it.
Once it starts blocking you, does it eventually let you reconnect? You might be able to use some sort of algorithm to dynamically find how long to wait before trying again.
You could consider something similar to what TCP congestion control does: start with some wait time between connections; when one completes successfully, decrease the wait by a constant, and when you get an error, double the wait (or multiply it by a constant). A sketch of this follows below.
It's very likely, though, that they're doing something more complicated than just rate-limiting connections. Without knowing what you have to get around, it's hard to know how to get around it.
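A minimal sketch of that additive-decrease/multiplicative-increase idea, assuming a hypothetical fetchPage action that throws when the site blocks you; all constants are placeholders to tune:

// Additive-decrease / multiplicative-increase pacing, in the spirit of
// TCP congestion control.
public class AdaptivePacer {
    private static final long MIN_WAIT_MILLIS = 500;
    private static final long MAX_WAIT_MILLIS = 60_000;
    private static final long DECREASE_STEP_MILLIS = 250;

    private long waitMillis = 2_000; // initial guess

    // Call once per request attempt; returns true on success.
    boolean attempt(Runnable fetchPage) throws InterruptedException {
        Thread.sleep(waitMillis);
        try {
            fetchPage.run(); // hypothetical: throws RuntimeException when blocked
            waitMillis = Math.max(MIN_WAIT_MILLIS, waitMillis - DECREASE_STEP_MILLIS);
            return true;
        } catch (RuntimeException blocked) {
            waitMillis = Math.min(MAX_WAIT_MILLIS, waitMillis * 2);
            return false;
        }
    }
}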
If the website gives you a random block time after you reach a certain limit, it's almost impossible to find the best wait time. I think your best bet is to use a pool of HTTP proxies and access the website through them round-robin. That's not very nice... but technically it is the most reliable way to access a website programmatically when it blocks you after a certain amount of traffic.
Here is a link about how to use a proxy: http://docs.oracle.com/javase/6/docs/technotes/guides/net/proxies.html
You can also use HttpClient, which is simpler.
Try to Google around and you can find a lot of free proxy server lists.

Is there a way to read the inputstream for a specific amount of time?

I have a situation where a thread opens a telnet connection to a target machine and reads the data from a program that spits out everything in its buffer. After all the data is flushed out, the target program prints a marker; my thread keeps looking for this marker and then closes the connection (a successful read).
Sometimes the target program does not print any marker: it keeps on dumping data and my thread keeps on reading it (no marker is ever printed).
So I want to read the data only for a specific period of time (say 15 minutes, configurable). Is there any way to do this at the Java API level?
Use another thread to close the connection after 15 minutes. Alternatively, you could check after each read whether 15 minutes have passed and then simply stop reading and clean up the connection, but this only works if you're sure the remote server will keep sending data (if it doesn't, the read will block indefinitely).
Generally, no: input streams don't provide timeout functionality.
However, in your specific case, reading data from a socket, yes. What you need to do is set SO_TIMEOUT on your socket to a non-zero value (the timeout you need, in milliseconds). Any read operation that blocks for longer than the specified time will throw a SocketTimeoutException.
Watch out though: even though your socket connection is still valid after this, continuing to read from it may bring unexpected results, as you've already half-consumed your data. The easiest way to handle this is to close the connection, but if you keep track of how much you've read already, you can choose to recover and continue reading.
If you're using a Java Socket for your communication, have a look at the setSoTimeout(int) method.
The read() operation on the socket will block only for the specified time. After that, if no data has arrived, a java.net.SocketTimeoutException is thrown and, if handled correctly, execution can continue.
If the server really dumps data forever, the client will never be blocked in a read operation. You might thus regularly check (between reads) whether the current time minus the start time has exceeded your configurable delay, and stop reading if it has; see the sketch below.
If the client can be blocked in a synchronous read, waiting for the server to output something, then you might use a SocketChannel and start a timer thread that interrupts the main reading thread, shuts down its input, or closes the channel.
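A minimal sketch combining both suggestions: a per-read SO_TIMEOUT so no single read can block forever, plus an overall deadline check between reads. The host, port, buffer handling, and times are placeholders:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class BoundedReader {
    public static void main(String[] args) throws IOException {
        final long deadlineMillis = 15 * 60 * 1000; // overall budget: 15 minutes
        long start = System.currentTimeMillis();

        try (Socket socket = new Socket("target-host", 23)) { // placeholder host/port
            socket.setSoTimeout(5_000); // no single read may block longer than 5 s
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];

            while (System.currentTimeMillis() - start < deadlineMillis) {
                try {
                    int n = in.read(buf);
                    if (n < 0) break;          // server closed the connection
                    // ... scan buf[0..n) for the end marker, process data ...
                } catch (SocketTimeoutException e) {
                    // No data for 5 s; loop around and re-check the deadline.
                }
            }
        } // socket closed here, whether we saw the marker or hit the deadline
    }
}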

Wrapping a try-catch that fetches from the internet in a while loop

So I have a try-catch statement in a Java program that fetches things from the internet. How do I handle timeouts? Would I just wrap the try-catch in a while statement and, after some number of failed iterations, tell the user to try later?
How do I handle timeouts? Would I just wrap the try-catch in a while statement and, after some number of failed iterations, tell the user to try later?
I don't think that would be a good idea. IMO, the best thing to do is to pick a timeout that corresponds to how long you think the user should have to wait, and not use a loop. As @BalusC points out, any decent HTTP client API will give you a way to set the timeout before you make the request. Use it.
The problem with using a loop is that you are potentially adding load to an already overloaded server. Suppose that the real reason for the timeout is that the server is trying to handle too many requests in parallel, and each request is taking a long time. If you (the client) time out a request and then immediately retry it, you are probably just adding extra load ... making things worse.
The chances are that some users will hammer the retry button anyway. You don't need to do the hammering for them.
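For instance, with the java.net.http client (Java 11+), a minimal sketch of setting the timeout up front and failing once rather than looping; the URL and the durations are placeholders:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class SingleAttemptFetch {
    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))       // placeholder
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/data"))
                .timeout(Duration.ofSeconds(30))             // total time the user should wait
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        } catch (HttpTimeoutException e) {
            // One timeout, one message; no retry loop hammering the server.
            System.err.println("The server is not responding. Please try again later.");
        }
    }
}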
