I have an ObjectInputStream connected to an ObjectOutputStream through a socket, and I've been using Socket.setSoTimeout() to make ObjectInputStream.readObject() block for only 100 ms. Since I started doing this I've been getting a lot of StreamCorruptedExceptions from readObject(). Could the timeout be to blame?
I have a thread constantly receiving new data through this call, but I want to be able to stop it by setting a boolean to false. The thread has to keep polling the boolean, which it can't do while it's blocked in readObject().
You can use Thread.interrupt() to make it throw an InterruptedException, or in this case an InterruptedIOException. Make sure you don't swallow exceptions!
If you set the timeout shorter than the delays which normally occur while reading a stream, you can expect the timeout to fire while the stream is still properly active.
100 ms sounds like a long time, but it isn't if disk or network traffic is involved. Try timing out on something generous, like a full second.
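For reference, here is a minimal sketch of a stoppable reader along these lines, assuming the socket is already connected (the class and method names are invented for illustration). Note the caveat in the comments: if the timeout fires partway through an object, the stream can still be left corrupted, so the timeout must comfortably exceed the worst-case delivery time of a single object.

import java.io.EOFException;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class StoppableReader implements Runnable {
    private volatile boolean running = true;
    private final Socket socket;

    public StoppableReader(Socket socket) {
        this.socket = socket;
    }

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        try {
            // Generous timeout: long enough for normal network delays, short
            // enough that the stop flag is still checked reasonably often.
            // If this fires mid-object, the stream may still be corrupted.
            socket.setSoTimeout(1000);
            ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
            while (running) {
                try {
                    Object obj = in.readObject();
                    // process obj ...
                } catch (SocketTimeoutException e) {
                    // Nothing arrived within the timeout; loop around and
                    // re-check the stop flag.
                }
            }
        } catch (EOFException e) {
            // Peer closed the connection cleanly.
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace(); // don't swallow exceptions
        }
    }
}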
I am quite confused about the socket.setSoTimeout(int) method.
In the scenario where I call
socket.setSoTimeout(4000);
try {
    int data = input.read();
} catch (InterruptedIOException e) {
    // handle the timeout here
}
When calling setSoTimeout(), does it pause the socket and resume after 4000 milliseconds? Or does it completely block all reading from the socket, so that anything attempting to read from it while setSoTimeout is still active throws an exception?
If the latter, why is this useful at all? According to the documentation, the exception is thrown automatically after the timeout expires.
Thanks for the clarification.
The key part of the documentation for Socket.setSoTimeout() is:
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time.
This is saying that a read on the socket will be prevented from blocking any longer than the specified time (which is perhaps more clear when interpreted in light of the meaning of "timeout", and is certainly more clear if you are familiar with the system-level socket interface). It does not say that a read is guaranteed to block for that long, which indeed would be of questionable utility.
Among the problems solved by setting a timeout is that of handling clients that are uncleanly disconnected without closing the connection. The local machine has no way to detect that that has happened, so without a timeout, an attempt to read from a socket connected to such a client will block indefinitely.
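As an illustration, here is a sketch of the behavior described above; the host and port are placeholders, and the point is only that a read blocks until data arrives or the timeout elapses, whichever comes first:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("example.com", 80)) { // placeholder endpoint
            socket.setSoTimeout(4000); // a read may now block for at most ~4 s
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            try {
                // Returns as soon as data arrives, possibly much sooner than 4 s.
                String line = in.readLine();
                System.out.println("Received: " + line);
            } catch (SocketTimeoutException e) {
                // Raised only because no data arrived within 4 seconds;
                // the socket itself is still open and usable.
                System.out.println("No data within 4 s");
            }
        }
    }
}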
I think setSoTimeout() denotes the amount of time a server will wait for a response when reading. If the timeout is exceeded, an exception is thrown.
For example, if you call setSoTimeout(4000) on a socket,
the socket will wait only 4 seconds for the receiver to respond, and will throw an exception after that.
This is useful on slow networks or with unresponsive servers.
It avoids waiting indefinitely for a response.
I have a Java socket calling a server. However, I do not know at which address I can reach the server, so I create several sockets in several threads, and each tries to reach the server on one address. My problem is that I do not want to wait for the timeout, but I have no idea how to stop the sockets and their threads properly.
Code:
socket = new Socket();
socket.connect(endpoint, timeout); // **Blocking method**
OutputStream out = socket.getOutputStream();
//Write Data here
How can I interrupt the operation? I consider Thread.stop() bad style, and it also does not work properly. .NET TCP endpoints have a non-blocking Pending method that allows using boolean flags, but I could not find something similar.
I do not know at which address I can reach the server, so I put several sockets in several threads and they try to reach the server each on one address.
BAD. BAD decision. Perform some logical step to determine the server's address, or something that lets you discover the server's IP address first.
Scan multiple addresses this way only as a last resort.
My problem is that I do not want to wait for the timeout but have no idea how to stop the sockets and their threads properly.
You don't have any other option than the timeout. Socket.connect() is blocking; you can't do anything but wait.
You have to wait for the timeout, because that is when it makes sense to close the socket object you created; you can't simply close it beforehand. Reduce the timeout to the shortest interval within which the connection should be accepted.
How can I interrupt the operation? I consider Thread.stop() a bad style and it also does not work properly.
Right, you should not use Thread.stop() or Thread.interrupt(). They are bad programming style here.
If the timeout expires, call close() on the socket.
You should set a socket timeout for the client socket; it is best practice to set timeouts on sockets. The timeout should be around 10 seconds or more, depending on your needs.
You can set the timeout in your current code by calling
socket.setSoTimeout(timeout); for the read timeout, or
connect(endpoint, timeout) for the connect timeout, as you've already done in your code.
If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
You're probably not using a try-catch-finally in your code. That'd be a better design here.
As you're using a connect timeout, your code can be amended to exit the blocking method as shown below:
Socket socket = null;
try {
    socket = new Socket();
    socket.connect(endpoint, timeout); // **Blocking method**
    OutputStream out = socket.getOutputStream();
    // Write Data here
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (socket != null) {
        try {
            socket.close(); // always release the socket, even on failure
        } catch (IOException ignored) {
        }
    }
}
I'm using an ObjectStream over a TCP connection to send data from a client to a server. Sometimes the client is terminated while the server is still waiting for new data. In these cases readObject() stays blocked without throwing an exception, and my computation stops.
How can I determine whether the ObjectStream is disconnected or only waiting for more data?
Using a timeout is difficult because of the long delays between communications.
The only safe way is to use a timeout. I suspect the long delay you are seeing in detecting a disconnect is due to the nature of the network you have.
Is it really a problem if computation for a dead connection has stopped? It may waste resources for a short period, but you should detect the failure within minutes and can clean up the resources then.
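A minimal sketch of that approach, assuming a timeout value (2 minutes here, purely illustrative) longer than the longest legitimate gap between messages in your protocol:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class DisconnectDetector {
    // Treat a sufficiently long silence as a dead peer.
    static void readLoop(Socket socket) throws IOException, ClassNotFoundException {
        socket.setSoTimeout(120_000); // 2 minutes; tune to your traffic pattern
        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
        while (true) {
            try {
                Object obj = in.readObject();
                // process obj ...
            } catch (SocketTimeoutException e) {
                // Nothing arrived for 2 minutes: assume the client is gone.
                socket.close();
                return;
            }
        }
    }
}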
I've a situation where a thread opens a telnet connection to a target machine and reads the data from a program which spits out all the data in its buffer. After all the data is flushed out, the target program prints a marker. My thread keeps looking for this marker to close the connection (a successful read).
Sometimes, the target program does not print any marker; it keeps on dumping data and my thread keeps on reading it (no marker is printed by the target program).
So I want to read the data only for a specific period of time (say 15 mins, configurable). Is there any way to do this at the Java API level?
Use another thread to close the connection after 15 minutes (see the sketch below). Alternatively, you could check after each read whether 15 minutes have passed and then simply stop reading and clean up the connection, but this only works if you're sure the remote server will continue to send data (if it doesn't, the read will block indefinitely).
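A sketch of the first suggestion, using a scheduled task to force-close the socket after the deadline (the class and method names are invented for illustration):

import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ConnectionWatchdog {
    private static final ScheduledExecutorService TIMER =
            Executors.newSingleThreadScheduledExecutor();

    // Schedules a forced close of the socket after the given delay. A read
    // blocked on that socket then fails with a SocketException, which the
    // reading thread can treat as "time budget exhausted".
    public static ScheduledFuture<?> closeAfter(Socket socket, long minutes) {
        return TIMER.schedule(() -> {
            try {
                socket.close();
            } catch (IOException ignored) {
                // already closed, or close failed; nothing more to do
            }
        }, minutes, TimeUnit.MINUTES);
    }
}

If the marker arrives in time, cancel the returned future so the watchdog doesn't close a socket that already finished normally.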
Generally, no. Input streams don't provide timeout functionality.
However, in your specific case, that is, reading data from a socket, yes. What you need to do is set SO_TIMEOUT on your socket to a non-zero value (the timeout you need, in milliseconds). Any read operation that blocks for longer than the specified time will throw a SocketTimeoutException.
Watch out though: even though your socket connection is still valid after this, continuing to read from it may bring unexpected results, as you've already half-consumed your data. The easiest way to handle this is to close the connection, but if you keep track of how much you've read already, you can choose to recover and continue reading.
If you're using a Java Socket for your communication, you should have a look at the setSoTimeout(int) method.
The read() operation on the socket will block only for the specified time. After that, if no information has been received, a java.net.SocketTimeoutException is raised and, if treated correctly, execution will continue.
If the server really dumps data forever, the client will never be blocked in a read operation. You might thus regularly check (between reads) whether the current time minus the start time has exceeded your configurable delay, and stop reading if it has (see the sketch below).
If the client can be blocked in a synchronous read, waiting for the server to output something, then you might use a SocketChannel and start a timer thread that interrupts the main reading thread, shuts down its input, or closes the channel.
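A sketch of the between-reads deadline check mentioned above; the marker string and the line-oriented reading are assumptions about the target program:

import java.io.BufferedReader;
import java.io.IOException;

public class MarkerReader {
    static void readUntilMarkerOrDeadline(BufferedReader in, long budgetMillis)
            throws IOException {
        long deadline = System.currentTimeMillis() + budgetMillis;
        String line;
        while ((line = in.readLine()) != null) {
            if (line.contains("END-MARKER")) { // hypothetical end-of-data marker
                return; // normal, successful termination
            }
            if (System.currentTimeMillis() > deadline) {
                return; // time budget exhausted; caller cleans up the connection
            }
            // process line ...
        }
    }
}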
Here's a snippet of a web server that I am currently building...
// ...
threadPool = Executors.newCachedThreadPool();
while (true)
    if (this.isOn) {
        try { // listen for incoming connection
            this.clientSocket = serverSocket.accept();
        } catch (IOException e) {
            System.err.println("LOG: >> Accept failed!");
            System.exit(1);
        }
        // as soon as a connection is established send the socket
        // with a handler/processor to the thread pool for execution
        threadPool.execute(new ClientRequestProcessor(clientSocket));
    }
//...
Please note that the isOn variable is a VOLATILE boolean.
If I turn the if into a while... this code works... but as it is, it doesn't. May I ask why? From a logical point of view both should work, even if I test the flag in an if... am I missing something?!
[Later edit:] By "not working" I mean a browser (e.g. Firefox) cannot connect; it keeps trying but eventually times out. Again, if I change that if(isOn) into a while(isOn), it works like a charm.
Any suggestions/ideas are more than welcome!!!
P.S. I need this combo of while(true) wrapping an if/while(flag test) because the server can be started/stopped from a GUI, so the top-level while(true) is needed so I can recheck whether I am on (and thus listening for connections) or off (and don't care about incoming connections). Needless to say, the GUI's event handlers can modify the flag at any time.
A better solution is to close the server socket when you want it to stop, and start a new one in a new thread when you want it to start. This way you reject new connections and don't consume CPU when it's not doing anything.
When isOn == true and you turn it false, the loop will only stop accepting connections after the next new connection arrives (which could be any time later). Additionally, new client connections will just wait for accept() to be called, or eventually time out; by default, up to 50 connections can be queued waiting to be accepted.
When isOn == false, your thread will busy-wait, consuming a CPU. I suggest you put in a small delay such as Thread.sleep(250). This will cut CPU use dramatically, but not delay starting again by much.
BTW:
If you get an exception, you should log/print it. Otherwise, when it fails, you won't know why.
If accept() fails, it could be that the process has run out of file descriptors, so you don't want it to just die, killing all existing connections. A corrected version of the loop is sketched below.
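Putting those points together, a sketch of the loop using the names from the question (a fragment, not a complete class):

while (true) {
    if (this.isOn) {
        try {
            // blocks until a client connects
            Socket clientSocket = serverSocket.accept();
            threadPool.execute(new ClientRequestProcessor(clientSocket));
        } catch (IOException e) {
            // log instead of System.exit(1), so one failed accept
            // doesn't kill all existing connections
            System.err.println("LOG: >> Accept failed: " + e);
        }
    } else {
        try {
            Thread.sleep(250); // avoid busy-waiting while the server is off
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break; // treat interruption as a shutdown request
        }
    }
}

To make a stop take effect immediately, the GUI handler can also close serverSocket, which makes a blocked accept() throw a SocketException.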
If you have the while(true) and then if(this.isOn), the outer loop has no way of stopping. What happens when the isOn flag is turned to false? The while loop never stops, because it is essentially made to be infinite. Plug in an else statement to make it break and it should work as expected.
If you take out the if statement and just make it while(this.isOn), then when the isOn flag is turned to false the loop ends. No infinite loop.
Those are my thoughts at first glance ...
My guess would be based on your tight loop there. Is it possible you have multiple instances of the program running on your server? The if(isOn) version will not shut down if you set isOn to false; instead it will simply loop forever, burning your CPU.