javax.ws.rs Client Timeout

So I know there are two timeouts that can be set (namely the READ and CONNECT timeouts), but neither of those satisfies what I need.
I want to simply abort the connection if it has been up for TIMEOUT milliseconds.
I experimented by making the call to an endpoint that simply sleeps for 5000 ms. I set the READ_TIMEOUT to 2000 ms, which should work because the connection doesn't send bytes for the amount of time it is asleep, but the timeout never occurs. This leads me to believe there are some heartbeat bytes being sent?
Is there a way to close this connection after X amount of time?
That's all I really need lol
Thanks,
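One workaround for enforcing a total time budget, independent of the read and connect timeouts, is to invoke the request asynchronously and bound the wait on the returned Future. Below is a minimal sketch using the standard JAX-RS 2.0 client API; the URL and the 2000 ms budget are placeholders, and whether cancel() actually tears down the in-flight connection depends on your JAX-RS provider.

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class TotalTimeoutSketch {
    private static final long TIMEOUT_MS = 2000; // total budget, placeholder value

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Client client = ClientBuilder.newClient();
        // Fire the request asynchronously so we can bound the total wait ourselves.
        Future<Response> future = client.target("http://example.com/slow") // placeholder URL
                .request()
                .async()
                .get();
        try {
            Response response = future.get(TIMEOUT_MS, TimeUnit.MILLISECONDS);
            System.out.println("Status: " + response.getStatus());
        } catch (TimeoutException e) {
            // Budget exceeded: abandon the request. Cancellation is best-effort.
            future.cancel(true);
            System.out.println("Aborted after " + TIMEOUT_MS + " ms");
        } finally {
            client.close();
        }
    }
}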


Java: When to use Socket setSoTimeout?

I am making an application where the client sends a message to the server and then waits for 5 seconds (let's assume) for the server to respond, and if there is no return message, it retries. If the server responds with the message then the client processes it. This goes on in a loop and happens again after some time.
For this purpose I was thinking of using setSoTimeout(time) on the client Socket, but after reading the javadoc and a lot of explanations on the internet I am confused as to whether this approach is right.
What I read on the internet
(1) If I use setSoTimeout on the socket then it gives the timeout for the duration in which the connection needs to be established and if it is not established then it retries to establish the connection for the given time.
(2) If I use setSoTimeout on the socket then it waits for incoming messages for the specified time interval and if no message is received then it stops waiting.
My questions are:
(1) Which of the above are true?
(2) If the second statement is true, then can I use it for my implementation?
(3) If the second statement is true, when does the timeout timer kick off exactly? Is it when I declare the socket and set the timeout period on it or is it when I send the message?
If neither of the explanations applies to my case, then what should I do to wait a fixed interval of time on the client side for the server to reply? If the reply does come I should process it and move on and redo the same process. If the reply doesn't come I should move ahead and redo the whole process again.
(1) If I use setSoTimeout() on the socket then it gives the timeout for the duration in which the connection needs to be established and if it is not established then it retries to establish the connection for the given time.
This is incorrect. setSoTimeout() does not cause re-establishment of the connection at all, let alone 'for the given time'.
(2) If I use setSoTimeout() on the socket then it waits for incoming messages for the specified time interval and if no message is received then it stops waiting.
This is slightly more accurate, but there is no such thing as a message in TCP.
The correct explanation is that it blocks for up to the specified timeout for at least one byte to arrive. If nothing arrives within the timeout, a SocketTimeoutException is thrown.
(1) Which of the above are true?
Neither.
(2) If the second statement is true, then can I use it for my implementation?
It isn't, so the second part doesn't apply, but if any statement is true you can use it as part of your implementation. You don't have to ask.
(3) If the second statement is true, when does the timeout timer kick off exactly?
When you call read().
Is it when I declare the socket and set the timeout period on it or is it when I send the message?
Neither.
If neither of the explanations applies to my case, then what should I do to wait a fixed interval of time on the client side for the server to reply?
Set a read timeout.
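For illustration, here is a minimal sketch of the send-and-retry loop described in the question, driven by a read timeout. The host, port, and message format are placeholder assumptions.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class RetryClientSketch {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9000)) { // placeholder host/port
            socket.setSoTimeout(5000); // a read() will block for at most 5 seconds
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[1024];
            while (true) {
                out.write("ping\n".getBytes()); // placeholder request
                out.flush();
                try {
                    int n = in.read(buf); // the timeout timer starts here, at the read() call
                    if (n == -1) break;   // server closed the connection
                    System.out.println("Got: " + new String(buf, 0, n));
                } catch (SocketTimeoutException e) {
                    System.out.println("No reply within 5 seconds, retrying");
                }
            }
        }
    }
}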

Android connection timeouts

In the HttpURLConnection class there are two methods for setting timeouts. One is setConnectTimeout(t1) and the other is setReadTimeout(t2). What is the relationship between t1 and t2? Should t1 > t2?
setConnectTimeout() is the timeout for the connection, meaning how long to wait while connecting to the server.
setReadTimeout() is a timeout for reading, meaning how long to wait for a read from the server; if a read takes longer than that, it will time out.
In a way they don't have anything to do with each other.
setConnectTimeout(milliseconds): Sets the maximum time in milliseconds to wait while connecting.
setReadTimeout(milliseconds): Sets the maximum time to wait for an input stream read to complete before giving up.
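For reference, here is a minimal sketch that sets both timeouts; the URL and values are illustrative, and note that t1 < t2 here is perfectly legal since the two are independent.

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TwoTimeoutsSketch {
    public static void main(String[] args) throws IOException {
        HttpURLConnection con =
                (HttpURLConnection) new URL("http://example.com/").openConnection(); // placeholder URL
        con.setConnectTimeout(3000); // t1: give up if no connection within 3 s
        con.setReadTimeout(10000);   // t2: give up if a read blocks longer than 10 s
        try (InputStream in = con.getInputStream()) {
            while (in.read() != -1) { /* consume the response */ }
        } catch (SocketTimeoutException e) {
            // Thrown by whichever phase timed out.
            System.out.println("Timed out: " + e.getMessage());
        }
    }
}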

URLConnection, why two different timeouts? (connect and read) [duplicate]

Just curiosity. Is there a good reason why the class URLConnection needs to have two different timeouts?
The connectTimeout is the maximum time in milliseconds to wait while connecting. Connecting to a server will fail with a SocketTimeoutException if the timeout elapses before a connection is established.
The readTimeout is the maximum time to wait for an input stream read to complete before giving up. Reading will fail with a SocketTimeoutException if the timeout elapses before data becomes available.
Can you give me a good reason why these two values should be different? Why would a call need more time to perform the connection than to receive some data (or vice versa)?
I am asking this because I have to configure these values and my idea is to set the same value for both.
Let's say the server is busy, is configured to accept 'N' connections, all those connections are long-running, and all of a sudden you send in a request. What should happen? Should you wait indefinitely or should you time out? That's the connectTimeout.
Now let's say your server turns brain-dead, just accepting connections and doing nothing with them (or say the server synchronously goes to the database, does some time-consuming activity, and ends up in a deadlock, for example), while the client keeps waiting for the response. In this case what should the client do? Should it wait indefinitely for the response or should it time out? That's the read timeout.
The connection timeout is how long you're prepared to wait to get some sort of response from the server. It's not particularly related to what it is that you're trying to achieve.
But suppose you had a service that would allow you to give it a large number, and have it return its prime factors. The server might take quite a while to generate the answer and send it to you.
You might well have clear expectations that the server would quickly respond to the connection: maybe even a delay of 5 seconds here tells you that the server is likely to be down. But the read timeout might need to be much higher: it might be a few minutes before you get to be able to read the server's answer to your query.
The connect timeout is the time within which you want a (in normal situations TCP) connection to be established. The default timeouts as specified in the internet RFCs and implemented by the various OSes are normally in the minute(s) range. But we know that if a server is available and reachable, it will respond within milliseconds, and otherwise not at all. A normal value would be a couple of seconds at most.
The read timeout is the time within which the server is expected to respond after it has received the incoming request. Read timeouts therefore depend on the time within which you expect the server to deliver the result. This depends on the type of request you are making, and the timeout should be larger if the processing requires some time or the server may be very busy in some situations. Especially if you retry after a read timeout, it is best not to set the read timeout too low; normally a factor of 3-4 times the expected response time.
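To make the asymmetry concrete, here is a sketch along the lines of the advice above: a short connect timeout, a much longer read timeout, and separate handling for the two failure modes. The URL and values are illustrative.

import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.net.URLConnection;

public class AsymmetricTimeoutsSketch {
    public static void main(String[] args) throws IOException {
        URLConnection con = new URL("http://example.com/report").openConnection(); // placeholder URL
        con.setConnectTimeout(2000);  // reachable servers answer within milliseconds
        con.setReadTimeout(60000);    // roughly 3-4x the expected processing time
        try {
            con.connect(); // the connect timeout applies here
        } catch (SocketTimeoutException e) {
            System.out.println("Server unreachable: " + e.getMessage());
            return;
        }
        try (InputStream in = con.getInputStream()) { // the read timeout applies from here on
            while (in.read() != -1) { /* consume the response */ }
        } catch (SocketTimeoutException e) {
            System.out.println("Connected, but the server was too slow to respond");
        }
    }
}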

Netty - connectTimeoutMillis vs. ReadTimeoutHandler

From the Netty API Documentation
connectTimeoutMillis = "the connect timeout in milliseconds. 0 if disabled."
And
ReadTimeoutHandler = Raises a ReadTimeoutException when no data was read within a certain period of time.
From a client perspective, am I correct in interpreting the aforementioned as follows?
The client will attempt to connect to the host for up to "connectTimeoutMillis". If a connection is established, and a ReadTimeoutHandler is NOT added to the Pipeline, a Channel can wait on a response indefinitely. If a ReadTimeoutHandler was added to the Pipeline, a ReadTimeoutException will be raised once timeoutSeconds has elapsed.
Generally speaking, I'd like to only attempt to connect to a host for up to 'x' seconds, but if a request was sent across the wire, I'd like to wait up to 'y' seconds for the response. If it shapes/influences the answer, the client is Netty, but the server is not.
Follow-up: Is timeoutSeconds on the ReadTimeoutHandler the timeout between successive bytes read, or for the entire request/response? Example: If timeoutSeconds was 60, and a single byte (out of a total of 1024) was read every 59 seconds, would the entire response be read successfully in 60,416 seconds (1024 × 59), or would it fail because the total elapsed time exceeded 60 seconds?
ReadTimeoutHandler doesn't understand the concept of a response. It only understands either a messageReceived event in Netty 3, or an inboundBufferUpdated event in Netty 4. Speaking from an NIO perspective the exact impact of this behaviour depends on where ReadTimeoutHandler is in your pipeline. (I've never used OIO so can't say if the behaviour is exactly the same).
If ReadTimeoutHandler is below any frame decoder in your pipeline (i.e. closer to the network) then the behaviour you describe is correct: a single byte read will reset the timer and, as you've identified, could result in the response taking a very long time to be read. If you were writing a server, this could be exploited to mount a denial-of-service attack at the cost of very little effort on the part of the attacker.
If ReadTimeoutHandler is above your frame decoder then it applies to your entire response. I think this is the behaviour you're looking for.
Note that the ReadTimeoutHandler is also unaware of whether you have sent a request - it only cares whether data has been read from the socket. If your connection is persistent, and you only want read timeouts to fire when a request has been sent, you'll need to build a request / response aware timeout handler.
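Here is a sketch of that placement, assuming Netty 4 and a line-based protocol; the host, port, decoder, and timeout values are placeholder choices.

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.timeout.ReadTimeoutHandler;

public class ClientPipelineSketch {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000) // 'x' seconds to connect
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Placed below the decoder, the timer would reset on every raw byte:
                            // ch.pipeline().addLast(new ReadTimeoutHandler(60));
                            ch.pipeline().addLast(new LineBasedFrameDecoder(8192)); // placeholder decoder
                            // Placed above the decoder, it fires unless a whole decoded
                            // frame arrives within 'y' = 60 seconds:
                            ch.pipeline().addLast(new ReadTimeoutHandler(60));
                        }
                    });
            b.connect("localhost", 9000).sync().channel().closeFuture().sync(); // placeholder host/port
        } finally {
            group.shutdownGracefully();
        }
    }
}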
Yes, you have correctly identified the difference between connect timeout and read timeout. Note that whatever any documentation may say to the contrary, the default or zero connect timeout means about 60-70 seconds, not infinity, and you can only use the connect timeout parameter to reduce that default, not increase it.
Read timeout starts when you call read() and ends when it expires or data arrives. It is the maximum time that read() may block waiting for the first byte to arrive. It doesn't block a second time in a single invocation.

Network listener in Java

I want to detect when the network goes down. Can I capture that event? I am not finding the proper API or any example that explains how to do this.
I am using a socket for (TCP) communication, and I open the socket when the network is available. I have observed that the socket does not throw any exception when the network goes down.
If anyone has done this, any example links would be really helpful. Thanks in advance.
The problem is that no 'network down' event exists in TCP connections; they just go down.
As suggested by Jerome, you should check whether a timeout is reached.
Of course, if the network goes down you won't receive packets nor be able to send them, so the underlying InputStream and OutputStream will throw an IOException, but only when they realize that the network is not working properly (usually 2*RTT = 120 seconds; it depends on how the TCP layer is managed).
Have a look at the TCP state diagram yourself.
What typically happens is that while in ESTABLISHED your socket will keep sending data while waiting for an ACK from the destination. The ACK won't come since the network went down, so your socket's window fills up and the socket starts resending packets until the real timeout intervenes and the exception is thrown.
Another case is when the network goes down and your socket realizes that it cannot write to the channel anymore: it will throw an exception immediately upon calling outStream.write(...).
It's not that easy to tell whether the network is off or just slow.
If you set timeouts, an exception will be thrown if an operation takes too long:
For sockets:
socket.setSoTimeout(CONNECTION_TIMEOUT);
For HttpURLConnections:
HttpURLConnection con = (HttpURLConnection)url.openConnection();
con.setConnectTimeout(CONNECTION_TIMEOUT);
con.setReadTimeout(CONNECTION_TIMEOUT);
TCP is designed to be quiet when idle. There are no administrative packets on the wire when there is no pending packet. If the connection dies while idle, you will not know, no matter what the timeout settings are. TCP does have keepalives, but they're pretty much useless at the recommended interval of 2 hours or longer.
You need to build some heartbeat or keepalive into your application protocol to detect stale connections. A keepalive is nothing but a no-op packet sent at a regular interval to trigger the TCP timeout when the connection is down. In my app, I do this every 10 seconds.
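A minimal sketch of such an application-level keepalive, assuming a hypothetical line protocol in which the server simply ignores "NOOP" messages:

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatSketch {
    public static void start(Socket socket) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                OutputStream out = socket.getOutputStream();
                out.write("NOOP\n".getBytes()); // hypothetical no-op message in your protocol
                out.flush();
            } catch (IOException e) {
                // The write failed, so the connection is stale: tear it down.
                System.out.println("Connection lost: " + e.getMessage());
                try { socket.close(); } catch (IOException ignored) { }
                scheduler.shutdown();
            }
        }, 10, 10, TimeUnit.SECONDS); // every 10 seconds, as in the answer
    }
}

Note that a dead connection may absorb several writes into local buffers before a write actually fails, so pair this with a read timeout on the receiving side.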
Why don't you try pinging www.google.com?
See http://java.sun.com/j2se/1.5.0/docs/guide/nio/example/Ping.java
