Python Socket Client wait time - java

How do I make a socket wait for only a certain time before moving on? I have a client that expects a packet from a broadcast receiver in Android, but it waits forever whenever it doesn't get the packet over the socket. Is it possible to time out a particular receive request in Python, to maybe 5 seconds?
It may be trivial, but I am coming from a Java background.
def obtain_packet(self):
    data_received = self.GetPCSocket().recv(4096)  # blocks forever if nothing arrives
    print(data_received)
    return str(data_received.strip())

From the docs:
socket.settimeout(value)
Set a timeout on blocking socket operations. The value argument can be a nonnegative float expressing seconds, or None. If a float is given, subsequent socket operations will raise a timeout exception if the timeout period value has elapsed before the operation has completed. Setting a timeout of None disables timeouts on socket operations. s.settimeout(0.0) is equivalent to s.setblocking(0); s.settimeout(None) is equivalent to s.setblocking(1).
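For the Java-background angle: settimeout(5.0) on a Python socket plays the same role as setSoTimeout(5000) on a java.net.Socket. A minimal sketch of the Java equivalent, with a placeholder host and port:

import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

class RecvTimeout {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("192.168.0.10", 9000)) { // placeholder host/port
            socket.setSoTimeout(5000); // analogue of Python's settimeout(5.0)
            byte[] buffer = new byte[4096];
            try {
                int n = socket.getInputStream().read(buffer);
                System.out.println(n == -1 ? "peer closed" : new String(buffer, 0, n));
            } catch (SocketTimeoutException e) {
                System.out.println("no packet within 5 s, moving on"); // instead of blocking forever
            }
        }
    }
}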

Related

Ensuring socket connectivity with no control of underlying protocol

I have a Java application which manages several socket connections to devices. I have no control over the protocol which these devices implement, and now I want my Java application to send heartbeats for each device. The devices do not send data, but only respond to commands.
The Javadoc for InputStream.read() states that if the end of the stream is reached, it will return -1. So that seems like a reasonable way to check if the connection is open. But when I implement this solution, there are no bytes available (since the device only responds to commands), and since the connection is open, it hangs at the read call forever. For example, I peek at one byte, and if it were -1 the heartbeat would be "unhealthy":
import java.io.BufferedInputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public static void main(final String[] args) throws IOException {
    try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress("192.168.30.99", 25901), 1000);
        System.out.println("Connected");
        final BufferedInputStream bis = new BufferedInputStream(socket.getInputStream());
        bis.mark(1);
        System.out.println(bis.read()); // Stalls forever here
        bis.reset();
        System.out.println("Done");
    }
}
Is it reasonable to say that, if no byte is received within x milliseconds, the device is connected?
Is there any surefire way to check socket connectivity without heartbeats where the IP and port is important?
Is there any surefire way to check socket connectivity without heartbeats where the IP and port is important?
No, you can't reliably know if the other end is alive unless you try to communicate with it.
If the other end doesn't have a no-op ping function, you're pretty much out of luck. Waiting in a blocking read() call won't help you if the connection gets cut off.
Is it reasonable to say that, if no byte is received within x milliseconds, the device is connected?
No. It means that the device hasn't sent anything in x milliseconds. Which is normal, as it only responds to commands.
When the other end of the socket does not write any bytes and waits to read from the socket first, blocking on read is the default behavior. With no control over the protocol, little can be done.
It is reasonable to say that a successful connect is a weaker kind of heartbeat; you don't have to wait x milliseconds, which makes no difference with such a protocol.
Another, trickier way: you can try to send a few bytes that are most unlikely to be a valid command, for example '\0' or '\n', hoping that they do no harm to the device and that the device actively closes the socket on such an invalid command. When the other end closes the socket actively, a read call on that socket should return -1.
A better heartbeat always has something to do with the protocol, like the no-op ping command suggested by @Kayaman.
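A minimal sketch of that probe idea, assuming a 2-second cap on the read and '\n' as the probe byte (note that if the device does reply with real data, this read consumes one byte of it):

import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

class ProbeHeartbeat {
    // Returns true only if the peer visibly closed the connection.
    static boolean peerClosed(Socket socket) throws IOException {
        socket.setSoTimeout(2000);            // cap the blocking read at 2 s
        socket.getOutputStream().write('\n'); // hopefully-harmless probe byte
        socket.getOutputStream().flush();
        try {
            return socket.getInputStream().read() == -1; // -1 = peer closed the socket
        } catch (SocketTimeoutException e) {
            return false; // no reply in time: with this protocol, we cannot tell
        }
    }
}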
Maybe TCP-level keep-alive is a solution for you. You can turn it on with:
socket.setKeepAlive(true);
It sets the SO_KEEPALIVE socket option. Quote from the SocketOptions Java API:
When the keepalive option is set for a TCP socket and no data has been exchanged across the socket in either direction for 2 hours (NOTE: the actual value is implementation dependent), TCP automatically sends a keepalive probe to the peer. This probe is a TCP segment to which the peer must respond. One of three responses is expected:
1. The peer responds with the expected ACK. The application is not notified (since everything is OK). TCP will send another probe following another 2 hours of inactivity.
2. The peer responds with an RST, which tells the local TCP that the peer host has crashed and rebooted. The socket is closed.
3. There is no response from the peer. The socket is closed.
The purpose of this option is to detect if the peer host crashes. Valid only for TCP socket: SocketImpl
You could also use SO_TIMEOUT by calling:
socket.setSoTimeout(timeout);
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
Call those right after the connect() or accept() call, before the program enters the 'no control of underlying protocol' state.
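For example, a minimal sketch wiring both options up right after connect(), reusing the address from the question (the 5-second read timeout is an assumed value):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

class TimeoutSetup {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("192.168.30.99", 25901), 1000);
            socket.setKeepAlive(true); // SO_KEEPALIVE: kernel probes the peer after long inactivity
            socket.setSoTimeout(5000); // SO_TIMEOUT: any later read() fails after 5 s
            // ... protocol traffic with the device goes here ...
        }
    }
}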

javax.ws.rs Client Timeout

So I know there are two timeouts that can be set (namely the READ and CONNECT timeouts), but neither of those satisfies what I need.
I want to simply abort the connection if it has been up for TIMEOUT milliseconds.
I experimented by making a call to an endpoint that simply sleeps for 5000 ms. I set the READ_TIMEOUT to 2000 ms, which should work because the connection doesn't send bytes while the endpoint is asleep, but the timeout never occurs. This leads me to believe there are some heartbeat bytes being sent?
Is there a way to close this connection after X amount of time?
That's all I really need lol
Thanks,
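One hedged option with the javax.ws.rs client is to fire the request asynchronously and cap the total wait with Future.get(), cancelling the call on timeout. A sketch, assuming a JAX-RS 2.x client and a placeholder URL:

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

class TotalTimeout {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Client client = ClientBuilder.newClient();
        Future<Response> future = client.target("http://example.com/sleepy") // placeholder URL
                .request().async().get();
        try {
            // Caps the *total* call time, independent of the READ and CONNECT timeouts.
            Response response = future.get(2000, TimeUnit.MILLISECONDS);
            System.out.println(response.getStatus());
        } catch (TimeoutException e) {
            future.cancel(true); // abort the in-flight request
            System.out.println("Aborted after 2000 ms");
        } finally {
            client.close();
        }
    }
}

Whether cancel(true) actually tears down the underlying connection depends on the JAX-RS implementation in use.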

JAVA : When to use Socket setSoTimeout?

I am making an application where the client sends a message to the server and then waits 5 seconds (let's assume) for the server to respond; if there is no return message, it retries. If the server responds with a message, the client processes it. This goes on in a loop, and the cycle repeats after some time.
For this purpose I was thinking of using setSoTimeout(time) on the client Socket, but after reading the Javadoc and a lot of explanations on the internet I am confused as to whether this approach is right.
What I read on the internet
(1) If I use setSoTimeout on the socket, then it gives the timeout for the duration in which the connection needs to be established, and if it is not established it retries to establish the connection for the given time.
(2) If I use setSoTimeout on the socket, then it waits for incoming messages for the specified time interval, and if no message is received it stops waiting.
My questions are:
(1) Which of the above are true?
(2) If the second statement is true, then can I use it for my implementation?
(3) If the second statement is true, when does the timeout timer kick off exactly? Is it when I declare the socket and set the timeout period on it, or is it when I send the message?
If neither explanation applies to my case, what should I do to wait for a fixed interval of time on the client side for the server to reply? If the reply does come, I should process it, move on, and redo the same process. If the reply doesn't come, I should move ahead and redo the whole process again.
(1) If I use setSoTimeout() on the socket then it gives the timeout for the duration in which the connection needs to be established and if it is not established then it retries to establish the connection for the given time.
This is incorrect. setSoTimeout() does not cause re-establishment of the connection at all, let alone 'for the given time'.
(2) If I use setSoTimeout() on the socket then it waits for incoming messages for the specified time interval and if no message is received then it stops waiting.
This is slightly more accurate, but there is no such thing as a message in TCP.
The correct explanation is that it blocks for up to the specified timeout for at least one byte to arrive. If nothing arrives within the timeout, a SocketTimeoutException is thrown.
(1) Which of the above are true?
Neither.
(2) If the second statement is true, then can I use it for my implementation?
It isn't, so the second part doesn't apply, but if any statement is true you can use it as part of your implementation. You don't have to ask.
(3) If the second statement is true, when does the timeout timer kick off exactly?
When you call read().
Is it when I declare the socket and set the timeout period on it, or is it when I send the message?
Neither.
If neither explanation applies to my case, what should I do to wait for a fixed interval of time on the client side for the server to reply?
Set a read timeout.
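A minimal sketch of the asker's send-wait-retry loop under a read timeout (host, port, and message are placeholder assumptions):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

class RetryClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9000)) { // placeholder host/port
            socket.setSoTimeout(5000); // each read() blocks for at most 5 s
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[4096];
            while (true) {
                out.write("ping\n".getBytes()); // the client's message
                out.flush();
                try {
                    int n = in.read(buffer); // the timer starts at this call
                    if (n == -1) break;      // server closed the connection
                    System.out.println(new String(buffer, 0, n)); // process the reply
                } catch (SocketTimeoutException e) {
                    System.out.println("No reply within 5 s, retrying"); // loop retries
                }
            }
        }
    }
}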

Netty - connectTimeoutMillis vs. ReadTimeoutHandler

From the Netty API Documentation
connectTimeoutMillis = "the connect timeout in milliseconds. 0 if disabled."
And
ReadTimeoutHandler = Raises a ReadTimeoutException when no data was read within a certain period of time.
From a client perspective, am I correct in interpreting the aforementioned as follows?
The client will attempt to connect to the host for up to "connectTimeoutMillis". If a connection is established, and a ReadTimeoutHandler is NOT added to the Pipeline, a Channel can wait on a response indefinitely. If a ReadTimeoutHandler was added to the Pipeline, a ReadTimeoutException will be raised once timeoutSeconds has elapsed.
Generally speaking, I'd like to only attempt to connect to a host for up to 'x' seconds, but if a request was sent across the wire, I'd like to wait up to 'y' seconds for the response. If it shapes/influences the answer, the client is Netty, but the server is not.
Follow-up: Is timeoutSeconds on the ReadTimeoutHandler the timeout between successive bytes read, or for the entire request/response? Example: If timeoutSeconds was 60, and a single byte (out of a total of 1024) was read every 59 seconds, would the entire response be read successfully in 60416 seconds, or would it fail because the total elapsed time exceeded 60 seconds?
ReadTimeoutHandler doesn't understand the concept of a response. It only understands either a messageReceived event in Netty 3, or an inboundBufferUpdated event in Netty 4. Speaking from an NIO perspective the exact impact of this behaviour depends on where ReadTimeoutHandler is in your pipeline. (I've never used OIO so can't say if the behaviour is exactly the same).
If ReadTimeoutHandler is below any frame decoder in your pipeline (i.e. closer to the network) then the behaviour you describe is correct - a single byte read will reset the timer and, as you've identified, could result in the response taking a long time to be read. If you were writing a server this could be exploited to form a denial of service attack at the cost of very little effort on the part of the attacker.
If ReadTimeoutHandler is above your frame decoder then it applies to your entire response. I think this is the behaviour you're looking for.
Note that the ReadTimeoutHandler is also unaware of whether you have sent a request - it only cares whether data has been read from the socket. If your connection is persistent, and you only want read timeouts to fire when a request has been sent, you'll need to build a request / response aware timeout handler.
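A sketch of that placement against the modern Netty 4 API (handler names, frame format, and the 60-second value are illustrative assumptions):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.timeout.ReadTimeoutHandler;

class TimeoutPipeline extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Closest to the network: reassembles whole frames from raw bytes.
        ch.pipeline().addLast("framer",
                new LengthFieldBasedFrameDecoder(65536, 0, 4, 0, 4));
        // Above the decoder, so the 60 s timer is reset per decoded frame,
        // not per byte; a slow-trickle peer cannot keep it alive.
        ch.pipeline().addLast("readTimeout", new ReadTimeoutHandler(60));
        // ... request/response handlers go after this ...
    }
}

On the connect side, the connect timeout is set separately, typically via ChannelOption.CONNECT_TIMEOUT_MILLIS on the client Bootstrap.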
Yes, you have correctly identified the difference between connect timeout and read timeout. Note that whatever any documentation may say to the contrary, the default or zero connect timeout means about 60-70 seconds, not infinity, and you can only use the connect timeout parameter to reduce that default, not increase it.
Read timeout starts when you call read() and ends when it expires or data arrives. It is the maximum time that read() may block waiting for the first byte to arrive. It doesn't block a second time in a single invocation.

Handling network timeouts in Java

I have a Java program that connects to a server through the XOT protocol. The library I use can handle the connect timeout, but there is no method like setSoTimeout() to handle a timeout when sending and receiving data.
Could anyone suggest a solution for this problem?
Thanks,
Quan
One option is to spawn a thread to do the writing and join(timeout) it. Likewise with reading from the connection. Obviously kill the thread (and treat the connection as being in an indeterminate state) when the timeout expires (as opposed to the thread dying).
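A minimal sketch of that approach (BlockingReader is a hypothetical stand-in for whatever blocking receive call the XOT library exposes):

import java.util.concurrent.atomic.AtomicReference;

class TimedRead {
    interface BlockingReader { byte[] read(); } // hypothetical stand-in for the library call

    // Runs the blocking read on a worker thread and waits at most timeoutMs.
    // Returns null on timeout; the connection is then in an indeterminate
    // state and should be discarded.
    static byte[] readWithTimeout(BlockingReader reader, long timeoutMs)
            throws InterruptedException {
        AtomicReference<byte[]> result = new AtomicReference<>();
        Thread worker = new Thread(() -> result.set(reader.read()));
        worker.start();
        worker.join(timeoutMs);
        if (worker.isAlive()) {
            worker.interrupt(); // best effort; a blocked socket read may not be interruptible
            return null;
        }
        return result.get();
    }
}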
Socket.setSoTimeout() should apply to receiving as well. See its Javadoc:
public void setSoTimeout(int timeout) throws SocketException
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
