The following is part of the code I will be using in my project.
public String fetchFromStream()
{
    try
    {
        int charVal;
        StringBuffer sb = new StringBuffer();
        while ((charVal = inputStream.read()) > 0) {
            sb.append((char) charVal);
        }
        return sb.toString();
    } catch (Exception e)
    {
        m_log.error("readUntil(..) : " + e.getMessage());
        return null;
    } finally {
        System.out.println("<<<<<<<<<<<<<<<<<<<<<< Called >>>>>>>>>>>>>>>>>>>>>>>>>>>");
    }
}
Initially the while loop works fine. But after what should be the last character has been read from the stream, I expected read() to return -1. This is where my problem starts: the code hangs, and not even the finally block is executed.
I was debugging this code in Eclipse to see what actually happens at run time. I set a breakpoint inside the while loop and watched the StringBuffer being populated with char values one by one. But at some point, while the while condition is being evaluated, the debugger loses control, and that is where the code hangs. No exception is thrown either.
What is happening here?
Edit:
This is how I'm getting my InputStream. Basically, I'm using Apache Commons Net for Telnet.
private TelnetClient getTelnetSession(String hostname, int port)
{
    TelnetClient tc = new TelnetClient();
    try
    {
        tc.connect(hostname, port != 0 ? port : 23);
        //These are instance variables
        inputStream = tc.getInputStream();
        outputStream = new PrintStream(tc.getOutputStream());
        //More codes...
        return tc;
    } catch (SocketException se)
    {
        m_log.error("getTelnetSession(..) : " + se.getMessage());
        return null;
    } catch (IOException ioe)
    {
        m_log.error("getTelnetSession(..) : " + ioe.getMessage());
        return null;
    } catch (Exception e)
    {
        m_log.error("getTelnetSession(..) : " + e.getMessage());
        return null;
    }
}
Look at the JavaDocs:
Reads the next byte of data from the input stream. The value byte is returned as an int in the range 0 to 255. If no byte is available because the end of the stream has been reached, the value -1 is returned. This method blocks until input data is available, the end of the stream is detected, or an exception is thrown.
In simple terms: if your stream has ended (e.g. end of file), read() returns -1 immediately. However, if the stream is still open and the JVM is waiting for data (slow disk, socket connection), read() will block; it isn't really hung.
Where are you getting the stream from? Check out available(), but please do not call it in a busy loop that burns CPU.
Finally: casting an int/byte to char only works for ASCII characters; consider putting a Reader on top of the InputStream.
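For example, a minimal sketch of such a loop, assuming the instance field from the question and an ASCII Telnet session (the charset is an assumption):
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

// Sketch only: decode bytes through a Reader instead of casting ints to char,
// and stop at -1 (end of stream) rather than at the first zero byte.
public String fetchFromStream() throws IOException {
    Reader reader = new InputStreamReader(inputStream, StandardCharsets.US_ASCII);
    StringBuilder sb = new StringBuilder();
    int ch;
    while ((ch = reader.read()) != -1) {
        sb.append((char) ch);
    }
    return sb.toString();
}
Note that on a live Telnet connection this still blocks until the server closes the stream; that blocking is the expected behaviour described above, not a hang.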
Read the docs:
read() will wait until there is more data on the InputStream if the InputStream is not closed.
I suspect you are doing this with sockets? This is the most common area where this comes up.
"Reads the next byte of data from the input stream. The value byte is returned as an int in the range 0 to 255. If no byte is available because the end of the stream has been reached, the value -1 is returned. This method blocks until input data is available, the end of the stream is detected, or an exception is thrown"
I have the same issue with Apache Commons on Android.
The read() call on the InputStream hangs forever for some reason. And no, it is not just blocking "until data is available".
My debugging information shows that several hundred chars are available(), yet it just randomly blocks on some read. However, whenever I send something to the Telnet server the block is suddenly released, and it continues reading a few chars until it suddenly stops/blocks again at some arbitrary point!
I believe there is a bug in the Apache Commons library! This is really annoying because there isn't a lot that can be done: no timeout for the read call or anything else.
EDIT: I was able to get around it by calling TelnetClient.setReaderThread(false). Apparently the bug in the library only shows up while a reader thread handles the input data; with it disabled, it works just fine for me!
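For reference, a minimal sketch of that workaround, written as a variant of the getTelnetSession method from the question above (setReaderThread has to be called before connect):
import java.io.IOException;
import org.apache.commons.net.telnet.TelnetClient;

// Sketch: disable the library's internal reader thread so input is read
// synchronously on the calling thread.
private TelnetClient getTelnetSession(String hostname, int port) throws IOException {
    TelnetClient tc = new TelnetClient();
    tc.setReaderThread(false); // must be set before connect()
    tc.connect(hostname, port != 0 ? port : 23);
    return tc;
}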
Related
I want to try to read from a socket's input stream for a certain time and then do something else if I don't receive any input.
I know I can set a timeout on the socket that will do that:
mySocket.setSoTimeout(200);
but will it still work (will the InputStream still throw an exception to my catch block) even if I wrap it in an ObjectInputStream?
public void startClient(){
    try {
        connection = new Socket(InetAddress.getByName(hostName), port);
        output = new ObjectOutputStream(connection.getOutputStream());
        output.flush();
        input = new ObjectInputStream(connection.getInputStream());
    }
    catch (IOException ex) {
        Logger.getLogger(Client.class.getName()).log(Level.SEVERE, null, ex);
    }
    ExecutorService worker = Executors.newSingleThreadExecutor();
    worker.execute(this);
}
And is there any other way of doing it if this doesn't work?
Also, if the stream starts reading and it times out, will it stop in the middle, or will it continue until there are no more bytes in the stream? In other words: will the stream time out after it has started reading the object if the 200 ms pass?
I want to do something like this:
while(!connection.isClosed()){
    try{
        com = (String) input.readObject();
        if(com.equals("TERMINATE CONNECTION")){
            closeConnection();
        }else if(com.equals("SEND DATA")){
            sendData();
        }
    }catch(timeout exception){
        if( timedout and want to do something){ do something else ....}
    }
    com="";
}
Thanks, everybody.
If I set a timeout on the socket's input stream
You don't 'set a timeout on the socket input stream'. You set the timeout on the socket itself.
will it still work even if I cast that stream to another type?
There is no casting to another type here. You are wrapping the socket input stream in another type. There is no way the socket read timeout can possibly be affected by that, and no way for the socket to even know that you've done it.
In short the question doesn't make sense.
plus if the stream starts reading and it times out will it stop in the middle of it or will it continue until there are no more bytes in the stream
I can't make head or tail of this either, but if you get a timeout it means no data arrived within the timeout period. It doesn't break the connection. It might, however, break the ObjectInputStream if you somehow get a timeout in the middle of reading an object.
NB:
The timeout manifests itself as a SocketTimeoutException, not as something you detect in an if-else chain.
It isn't correct to loop while connection.isClosed() returns false. It doesn't magically become true when the peer closes the connection. In this case the correct technique is to loop until you get an EOFException.
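Putting those pieces together, a rough sketch of the receive loop (field and method names are taken from the question; the 200 ms timeout is the question's own value; needs java.io.EOFException, java.io.IOException and java.net.SocketTimeoutException):
connection.setSoTimeout(200); // applies to every subsequent read on this socket
boolean done = false;
while (!done) {
    try {
        String com = (String) input.readObject();
        if (com.equals("TERMINATE CONNECTION")) {
            closeConnection();
            done = true;
        } else if (com.equals("SEND DATA")) {
            sendData();
        }
    } catch (SocketTimeoutException ste) {
        // nothing arrived within 200 ms; do the "something else" here
        // (beware: a timeout in the middle of an object can corrupt the ObjectInputStream)
    } catch (EOFException eof) {
        done = true; // peer closed the connection cleanly
    } catch (IOException | ClassNotFoundException e) {
        done = true; // unexpected failure; log and stop
    }
}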
I have a synchronized method in which I am using DataInputStream.readFully(). It's throwing an "EOFException". Why does readFully throw EOF when it is still inside the synchronized method? Below is the code for reference.
private static synchronized String getTransactionId() {
    try {
        String txnId_fname = SiteConfiguration.getInstance().getProperty("TRANSACTION.INFO_FILE", //
                LaneProcessor.DEFAULT_TRANSACTION_ID_FILE_NAME);
        File tmpFile = new File(txnId_fname);
        if (!tmpFile.exists()) {
            tmpFile.createNewFile();
        }
        else {
            long sz = tmpFile.length();
            if ( 12 == sz ) {
                // read the transaction id from the file, the ID must be 12 bytes long to be valid.
                DataInputStream dis = new DataInputStream(new FileInputStream(tmpFile));
                byte[] datainBytes = new byte[dis.available()];
                dis.readFully(datainBytes);
                transactionIdLog = new String(datainBytes, 0, datainBytes.length);
                if ( Stringer.isNumeric(transactionIdLog))
                {
                    transactionId = Long.valueOf(transactionIdLog);
                }
                dis.close();
                //log.debug("transaction id from the existing file"+transactionId);
            }
        }
        transactionId = ConvertUtils.incrementLong(transactionId);
        transactionIdLog = Long.toString(transactionId);
        transactionIdLog = Stringer.zpad(transactionIdLog, 12);
        _out = new FileOutputStream(tmpFile);
        _out.write(transactionIdLog.getBytes());
        _out.flush();
        _out.close();
    }
    catch (Exception e) {
        log.error("Error in transaction id generation" + e.getMessage(), e);
    }
    return transactionIdLog;
}
The contract for available is that it returns an estimate of the number of bytes available; if you try to read that many bytes, the program won't block but it may read fewer bytes than available says. If available's result is too high, then readFully could get an EOF exception. Unfortunately, I tried looking at the source of FileInputStream.available to see how it worked, but it's native, so I can't tell whether it could return a "too large" value. All I can say is, based on the javadoc, I don't think your code is guaranteed to work.
To see whether this really is the problem, I'd recommend having the program output datainBytes.length after the array is created, and then check that against the actual file size.
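As one possible fix, a small sketch that sizes the buffer from the file length instead of available() (variable names are the question's own; the 12-byte size has already been checked just above):
// Sketch: readFully() a buffer sized from the file itself, not from available().
DataInputStream dis = new DataInputStream(new FileInputStream(tmpFile));
try {
    byte[] datainBytes = new byte[(int) tmpFile.length()]; // or simply new byte[12]
    dis.readFully(datainBytes);
    transactionIdLog = new String(datainBytes, 0, datainBytes.length);
} finally {
    dis.close();
}
There is still a window in which another process could truncate the file between length() and readFully(), which is the scenario the next answer describes.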
Will the synchronized method throw EOFException?
Literally No. Any exception in the method will be caught and logged. So it won't propagate an EOFException. What is more, there is no throw new EOFException(...).
But could your method catch EOFException and log it? I think the answer is Yes!
The readFully method will throw EOFException if it cannot fill the buffer, and you have set the buffer size to the number of bytes that available() says are readable. But consider this scenario:
Your application executes to the point where available() returns.
Your application is paused (e.g. by the OS scheduler).
Some other application truncates the file.
Your application is resumed, and calls readFully ... only to discover that there are ZERO bytes to be read.
EOFException ...
This illustrates the point that the result of available() is only a hint. You can't entirely rely on it.
In fact, I don't think it is technically possible to code that method in such a way that an EOFException can never occur. You certainly can't do it without some kind of file locking to prevent other applications truncating the file while your application is reading it.
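If that guarantee matters, a rough sketch of holding a lock while reading (file locks are advisory on most platforms, so this only helps if the writers lock too; needs java.nio.channels.FileLock):
// Sketch: take a shared lock on the whole file for the duration of the read,
// so a cooperating writer cannot truncate it mid-read.
FileInputStream fis = new FileInputStream(tmpFile);
try {
    FileLock lock = fis.getChannel().lock(0L, Long.MAX_VALUE, true); // shared lock
    try {
        DataInputStream dis = new DataInputStream(fis);
        byte[] datainBytes = new byte[12];
        dis.readFully(datainBytes);
        transactionIdLog = new String(datainBytes, 0, datainBytes.length);
    } finally {
        lock.release();
    }
} finally {
    fis.close();
}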
I have a Java program that reads data from a TCP source. All works fine, except when my program (which acts as a client to the data source) is faster than the source can respond: BufferedReader.ready() throws an exception that closes my TCP connection, as it should. Is there any preferred way to keep the BufferedReader waiting for new input, since my source can sometimes have a slight delay?
Here is the part that I am talking about:
public aDataServer(String host, int port, StreamConnection aConnection) throws UnknownHostException, IOException {
    this.aConnection = aConnection;
    ndataServerSocket = new Socket(Inet4Address.getByName(host), port);
    ndataServerReader = new BufferedReader(new InputStreamReader(ndataServerSocket.getInputStream()));
}

public void run() {
    try {
        RemoteDevice dev = RemoteDevice.getRemoteDevice(aConnection);
        OutputStream outputStream = aConnection.openOutputStream();
        OutputStreamWriter osw = new OutputStreamWriter(outputStream);
        do {
            try {
                String ndata = ndataServerReader.readLine();
                osw.write(ndata + "\n");
                osw.flush();
                LOG.log(Level.INFO, "Sent");
            } catch (IOException io) {
                LOG.log(Level.SEVERE, "Client device({0}) disconnected: \n{1}", new Object[]{dev.getFriendlyName(true), io.getMessage()});
                break;
            }
        } while (ndataServerReader.ready());
    } catch (IOException ioe) {
        LOG.severe(ioe.getMessage());
    } finally {
        try {
            if (ndataServerSocket != null) {
                ndataServerSocket.close();
            }
            if (ndataServerReader != null) {
                ndataServerReader.close();
            }
        } catch (IOException ex) {
            LOG.log(Level.SEVERE, ex.getMessage());
        }
    }
}
You shouldn't be using ndataServerReader.ready(). Your do/while loop (which should almost certainly just be a while loop) appears to assume that ndataServerReader.ready() indicates there's more data to be read, which is not what it's for.
The Javadoc for Reader describes the ready() method as:
Tells whether this stream is ready to be read.
Returns: True if the next read() is guaranteed not to block for input, false
otherwise. Note that returning false does not guarantee that the next
read will block.
In other words, Reader.ready() will return false whenever a read would have to wait for more data before returning. This does not mean the Reader is done; in fact you should expect this method to return false often when working with a network stream, as it can easily have delays.
Your code currently is likely reading one line (in the do block) then checking if the reader is ready (in the while), which it probably isn't, and then exiting successfully. No exceptions are being thrown - you'd see a SEVERE level logging message if they were.
Instead of using ready(), take advantage of the documented behavior of readLine() that says it returns:
A String containing the contents of the line, not including any
line-termination characters, or null if the end of the stream has been
reached
In other words, simply doing:
String ndata = reader.readLine();
while (ndata != null) {
    osw.write(ndata + "\n");
    osw.flush();
    LOG.log(Level.INFO, "Sent");
    ndata = reader.readLine();
}
is sufficient to read the whole input stream.
Reference reading: What's the difference between (reader.ready()) and using a for loop to read through a file?
I'm firing up an external process from Java and grabbing its stdin, stdout and stderr via process.getInputStream() etc. My issue is: when I want to write data to my output stream (the proc's stdin) it's not getting sent until I actually call close() on the stream. I am explicitly calling flush().
I did some experimenting and noticed that if I increased the number of bytes I was sending, it would eventually go through. The magic number, on my system, is 4058 bytes.
To test, I'm sending the data over to a Perl script which reads like this:
#!/usr/bin/perl
use strict;
use warnings;
print "Perl starting";
while(<STDIN>) {
print "Perl here, printing this: $_"
}
Now, here's the Java code:
import java.io.InputStream;
import java.io.IOException;
import java.io.OutputStream;

public class StreamsExecTest {

    private static String readInputStream(InputStream is) throws IOException {
        int guessSize = is.available();
        byte[] bytes = new byte[guessSize];
        is.read(bytes); // This call has side effect of filling the array
        String output = new String(bytes);
        return output;
    }

    public static void main(String[] args) {
        System.out.println("Starting up streams test!");
        ProcessBuilder pb;
        pb = new ProcessBuilder("./test.pl");
        // Run the proc and grab the streams
        try {
            Process p = pb.start();
            InputStream pStdOut = p.getInputStream();
            InputStream pStdErr = p.getErrorStream();
            OutputStream pStdIn = p.getOutputStream();
            int counter = 0;
            while (true) {
                String output = readInputStream(pStdOut);
                if (!output.equals("")) {
                    System.out.println("<OUTPUT> " + output);
                }
                String errors = readInputStream(pStdErr);
                if (!errors.equals("")) {
                    System.out.println("<ERRORS> " + errors);
                }
                if (counter == 50) {
                    // Write to the stdin of the execed proc. The \n should
                    // in turn trigger it to treat it as a line to process
                    System.out.println("About to send text to proc's stdin");
                    String message = "hello\n";
                    byte[] pInBytes = message.getBytes();
                    pStdIn.write(pInBytes);
                    pStdIn.flush();
                    System.out.println("Sent " + pInBytes.length + " bytes.");
                }
                if (counter == 100) {
                    break;
                }
                Thread.sleep(100);
                counter++;
            }
            // Cleanup
            pStdOut.close();
            pStdErr.close();
            pStdIn.close();
            p.destroy();
        } catch (Exception e) {
            // Catch everything
            System.out.println("Exception!");
            e.printStackTrace();
            System.exit(1);
        }
    }
}
So when I run this, I get effectively nothing back. If immediately after calling flush(), I call close() on pStdIn, it works as expected. This isn't what I want though; I want to be able to continually hold the stream open and write to it whenever it so pleases me. As mentioned before, if message is 4058 bytes or larger, this will work without the close().
Is the operating system (running on 64bit Linux, with a 64bit Sun JDK for what it's worth) buffering the data before sending it? I could see Java having no real control over that, once the JVM makes the system call to write to the pipe all it can do is wait. There's another puzzle though:
The Perl script prints a line before going into the while loop. Since I check for any input from Perl's stdout on every iteration of my Java loop, I would expect to see it on the first run through the loop, then see the attempt at sending data from Java to Perl, and then nothing. But I actually only see the initial message from Perl (after the <OUTPUT> marker) when the write to the output stream happens. Is something blocking that I'm not aware of?
Any help greatly appreciated!
You haven't told Perl to use unbuffered output. Look in perlvar and search for $| for different ways to set unbuffered mode. In essence, one of:
HANDLE->autoflush( EXPR )
$OUTPUT_AUTOFLUSH
$|
Perl may be buffering it before it starts printing anything.
is.read(bytes); // This call has side effect of filling the array
No it doesn't. It reads between 1 and bytes.length bytes into the array and returns the number actually read, which your code ignores. See the Javadoc.
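For illustration, a rough sketch of a readInputStream variant that honours the return value (the 4096-byte chunk size is arbitrary):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: drain whatever is currently buffered without assuming read()
// fills the whole array; returns "" if nothing is available right now.
private static String readInputStream(InputStream is) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    byte[] chunk = new byte[4096];
    while (is.available() > 0) {
        int n = is.read(chunk); // won't block here: at least one byte is available
        if (n == -1) {
            break;
        }
        buffer.write(chunk, 0, n);
    }
    return buffer.toString();
}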
I don't see any obvious buffering in your code, so it may be on the Perl side. What happens if you put a newline \n at the end of your print statement?
Note also that you can't, in general, read the child's stdout and stderr on the main thread like that. You'll be subject to deadlock: for example, if the child process prints lots of stderr while the parent is blocked reading stdout, the stderr pipe will fill and the child process will block, but the parent will stay blocked forever trying to read stdout.
You need to use separate threads to read stdout and stderr (also separate from the main thread, which here is used to pump input to the process).
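A rough sketch of that threading (stream names follow the question's code; error handling trimmed):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Sketch: one thread per child output pipe, so neither pipe can fill up
// and deadlock the parent.
private static Thread pump(final InputStream in, final String tag) {
    Thread t = new Thread(new Runnable() {
        public void run() {
            try {
                BufferedReader r = new BufferedReader(new InputStreamReader(in));
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(tag + " " + line);
                }
            } catch (IOException ignored) {
            }
        }
    });
    t.start();
    return t;
}

// usage, with the streams from the question:
pump(pStdOut, "<OUTPUT>");
pump(pStdErr, "<ERRORS>");
// the main thread is now free to write to pStdIn and flush whenever it likes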
I have some strange socket behavior going on. I've set a timeout of 5 seconds using setSoTimeout, which should be plenty of time in my situation. According to the online Java documentation, a SocketTimeoutException should be thrown if it times out, and the socket remains valid, so I want to catch it and continue. However, instead of the inner catch, the outer catch (IOException) is catching the exception, and when I log the details it says it was a SocketTimeoutException. Another perplexing thing: if I change the timeout from 5 seconds to, say, 15 seconds and log the time each read takes, the times are always in the millisecond range, never even close to a second. Any ideas are GREATLY appreciated.
ReadThread code snippet
@Override
public void run()
{
    try
    {
        while (true)
        {
            byte[] sizeBuffer = new byte[BYTES_FOR_MESSAGE_SIZE];
            int bytesRead = this.inputStream.read(sizeBuffer);
            int length = 0;
            for (int i = 0; i < BYTES_FOR_MESSAGE_SIZE; i++)
            {
                int bitsToShift = 8 * i;
                int current = ((sizeBuffer[i] & 0xff) << bitsToShift);
                length = length | current;
            }
            byte[] messageBuffer = new byte[length];
            this.socket.setSoTimeout(5000); //5 second timeout
            try
            {
                this.inputStream.read(messageBuffer);
            }
            catch (java.net.SocketTimeoutException ste)
            {
                Log.e(this.toString(), "---- SocketTimeoutException caught ----");
                Log.e(this.toString(), ste.toString());
            }
        }
    }
    catch (IOException ioe)
    {
        Log.e(this.toString(), "IOException caught in ReadThread");
        Log.e(this.toString(), ioe.toString());
        ioe.printStackTrace();
    }
    catch (Exception e)
    {
        Log.e(this.toString(), "Exception caught in ReadThread");
        Log.e(this.toString(), e.toString());
        e.printStackTrace();
    }
    this.interfaceSocket.socketClosed();
}// end run
I agree with Brian. You are probably getting the timeout on the first read, not the second. The timeout once set remains in effect until you change it again.
Your second read call, where you read the 'message', seems to assume (a) that it will read the entire message and (b) that it will time out if the entire message doesn't arrive within 5 s. It doesn't work like that. It will time out if nothing arrives within 5 s; otherwise it will read whatever has arrived, up to messageBuffer.length, but it could be as little as one byte.
You should use DataInputStream.readFully() to read the entire message, and you need to completely reconsider your timeout strategy.
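As a rough sketch of that approach (reusing the question's fields; wrap the stream once, e.g. in the constructor, and note that a timeout in the middle of readFully still leaves the stream in an undefined state, hence the advice to rethink the timeout strategy):
// Sketch: read the length prefix and the body with readFully(), so a short
// read can never be mistaken for a complete message.
DataInputStream din = new DataInputStream(this.inputStream); // do this once, not per message
byte[] sizeBuffer = new byte[BYTES_FOR_MESSAGE_SIZE];
din.readFully(sizeBuffer); // blocks until the whole prefix has arrived
int length = 0;
for (int i = 0; i < BYTES_FOR_MESSAGE_SIZE; i++) {
    length |= (sizeBuffer[i] & 0xff) << (8 * i); // little-endian, as in the question
}
byte[] messageBuffer = new byte[length];
din.readFully(messageBuffer); // throws EOFException if the peer closes mid-message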
The exception is probably caught in the first try catch because of the earlier call to this.inputStream.read(). You have two of these calls: one in the outer try, one in the inner try.
Have you verified whether data is being read? If data is being read, you should expect the read operation to return after a few milliseconds. If it is not, the read operation should block for the time you specify. Maybe this has to do with the order in which you call setSoTimeout (perhaps calling it earlier will help).
Good luck,
B-Rad