I have an input stream coming from a black box (say B). All the messages coming in from this stream are serialized binary data, and each message starts with a four-byte int. Most of it is logging data and runs 24 hours a day. I read these four bytes using the readInt() method. Now, occasionally, the main thread exits with an EOFException and crashes the program.
After researching this, I found that it happens when there are fewer than four bytes in the input stream at the time of the readInt() call. My guess is that the buffer is not filling fast enough between successive reads. Possible solutions I am considering include checking available() before reading (which consumes too many cycles given the amount of data) or restarting when the exception occurs (which sounds like poor programming). If I could just block inside readInt(), that would be the best way, I think. I've looked at the implementation of readInt(), but again it boils down to blocking with read().
Does anyone know of a better solution?
Any blocking call down the call hierarchy is "bound" to make all the calls up the chain block, as long as both calls are part of the same thread of execution. The readInt method of DataInputStream makes four calls to the read method of the underlying input stream, each of which blocks until data is available, so your fear that the "buffer doesn't fill in fast enough" doesn't seem logical.
I have encountered this kind of exception in cases where the server process either dies or drops the connection, in which case the client ends up reading a -1 and throws the exception. Are you swallowing any exceptions in your client/server code? Do your logs show anything suspicious?
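For illustration, here is a minimal sketch of such a reader loop; openStream() and handleMessage() are hypothetical placeholders for however you obtain the stream from B and process a message:

// Hypothetical reader loop: readInt() already blocks until four bytes arrive,
// so an EOFException means the stream really ended (B closed or the connection dropped).
while (true) {
    try (DataInputStream in = openStream()) {   // hypothetical: opens a fresh DataInputStream from B
        while (true) {
            int header = in.readInt();          // blocks until four bytes are available
            handleMessage(header, in);          // hypothetical: reads and processes the rest of the message
        }
    } catch (EOFException eof) {
        // The stream genuinely ended: log it and loop around to reconnect instead of crashing.
    } catch (IOException io) {
        // Any other I/O failure: log and reconnect as well.
    }
}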
I believe you are using DataInputStream. That class throws EOFException when the stream it wraps returns -1 from its read() method (which does block until input data is available).
You should have a look at why the underlying stream's read() returns -1 in the first place.
The basic InputStream contract requires blocking reads. The EOFException you get is thrown when readInt() encounters the end-of-stream marker; since returning an incomplete int would be a bad idea, it throws EOFException instead.
The EOFException is thrown because the other end of the stream has reached its end, has been closed, or is no longer connected. You should check whether the black box terminates the connection.
Since the stream is network-based, your socket may have a timeout set; if that is the case, try changing the socket's SO timeout value (setSoTimeout).
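For example, assuming socket is the underlying java.net.Socket (the 30-second value is just a placeholder):

socket.setSoTimeout(30_000);   // read()/readInt() now fail with SocketTimeoutException after 30 s instead of waiting forever
// socket.setSoTimeout(0) restores the default of an infinite timeout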
Related
I've looked at the Javadoc, and it says:
The number of bytes read, possibly zero, or -1 if the channel has reached end-of-stream
I wonder whether the '-1' means that the connection is closed?
If so, then why is there an exception named ClosedChannelException that it throws?
What's the difference between these two concepts?
Exceptions are usually used in situations where the normal flow of an application cannot continue and some special handling needs to be applied. In particular, exceptions should never be used for control flow of events that are expected and usual.
In your situation the JavaDoc clearly states that -1 is returned if the end of the stream has been reached. For example, you read an image file over a stream and all bytes of the image have been read; then -1 is returned to notify your code that no more data is to be expected. This is a usual situation and part of the normal control flow. On the other hand, the ClosedChannelException is thrown if the channel was closed locally (i.e. by you) and you continue reading. This is unexpected: the data was not fully read and the application cannot continue as usual, as there is, in this example, no image to display.
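A small sketch of the two cases, assuming channel is an open, connected java.nio.channels.SocketChannel in blocking mode (error handling omitted):

ByteBuffer buffer = ByteBuffer.allocate(4096);
int n = channel.read(buffer);       // blocks until data arrives (in blocking mode)
if (n == -1) {
    // Normal control flow: the remote side reached end-of-stream; no more data will ever come.
}

channel.close();                    // we close the channel locally ...
channel.read(buffer);               // ... so this read is a programming error: ClosedChannelException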
Another reason, apart from mixing expected and unexpected program flows, for not using exceptions for control flow in expected situations is performance. In Java, exceptions are costly: an exception is a fairly large object, and collecting the current stack trace takes a considerable amount of time.
UPDATE 2022-02-14:
Reading data from a remotely closed channel, after receiving the available data, simply results in 0 bytes read and not in a ClosedChannelException. I adapted the answer accordingly. Thanks #user207421 for bringing this up.
When I was using regular Sockets, I could call getInputStream() and use available() to see how many bytes were available. I switched to SSLSocket, but now available() always returns 0 for some reason. When I read instead, I can still get data. How can I tell if there is data available in an SSLSocket so that I can service it without blocking if there is no data?
Notes:
I cannot call read() on the InputStream or the thread will block. I would like non-blocking in my implementation.
available() returns 0 even though there is data for SSLSocket's InputStream.
There is no way to do this. Your streams cannot tell you the length of the data without first decrypting it. available() will always return 0 for SSLSocket.
As mentioned in this chat, the reason you wanted to check for data is to prevent read() from blocking when called, so you can handle multiple connections on a single thread, instead of a Thread per Client system.
Instead, use a non-blocking alternative. java.nio currently doesn't have its own SSL implementation of SocketChannel, but you can find one online (like here) or create your own.
With this system, you can register a Selector to every channel, and manage them all using the "selector thread". I wrote an example of how to use a selector here (scroll down to Using a Selector).
With non-blocking IO, you get to handle multiple clients per thread, allowing you to scale up. This method of managing channels came about in response to the C10k problem.
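For reference, a rough sketch of that selector-driven loop with plain (non-SSL) channels; an SSL-capable channel wrapper like the ones mentioned above would slot in where the SocketChannel is used, the port number is arbitrary, and error handling is omitted:

Selector selector = Selector.open();
ServerSocketChannel server = ServerSocketChannel.open();
server.bind(new InetSocketAddress(8443));
server.configureBlocking(false);
server.register(selector, SelectionKey.OP_ACCEPT);

while (true) {
    selector.select();                                   // blocks until at least one channel is ready
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isAcceptable()) {
            SocketChannel client = server.accept();      // ready, so this does not block
            client.configureBlocking(false);
            client.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
            SocketChannel client = (SocketChannel) key.channel();
            ByteBuffer buf = ByteBuffer.allocate(4096);
            int n = client.read(buf);                    // never blocks in non-blocking mode
            if (n == -1) {                               // peer closed the connection
                key.cancel();
                client.close();
            } else if (n > 0) {
                buf.flip();
                // hand buf to this connection's protocol handler here
            }
        }
    }
}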
I assume you fixed your problem, but for those like me, I found a much easier solution. If you perform a read, then available() reports what was decrypted. How can you use (and abuse) this? Read a single byte with a very low SO timeout on your socket. If you catch a SocketTimeoutException, the connection is empty; if not, prepend the byte you read to your interpretation of the rest of the message, and keep reading until in.available() == 0 again.
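A sketch of that trick, assuming sslSocket is an already-connected SSLSocket; exception handling beyond the timeout is omitted:

sslSocket.setSoTimeout(1);                         // very short timeout so the probe returns almost immediately
InputStream in = sslSocket.getInputStream();
int first;
try {
    first = in.read();                             // forces at least one TLS record to be decrypted if data is pending
} catch (SocketTimeoutException e) {
    first = -2;                                    // nothing to read right now
}
if (first >= 0) {
    // Data is available: 'first' is the first byte of the message,
    // and in.available() now reports the rest of the decrypted record.
} else if (first == -1) {
    // The peer closed the connection.
}
sslSocket.setSoTimeout(0);                         // restore blocking reads for the actual message handling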
You can use available() on the InputStream of the underlying Socket. This worked in my case.
In the following scenario
ObjectOutputStream output = new ObjectOutputStream(socket.getOutputStream());
output.flush();
// Do stuff with it
Why is it always necessary to flush the buffer after initial creation?
I see this all the time and I don't really understand what has to be flushed. I kind of expect newly created variables to be empty unless otherwise specified.
Kind of like buying a trash-can and finding a tiny pile of trash inside that came with it.
In over 15 years of writing Java on a professional level I've never once encountered a need to flush a stream before writing to it.
The flush operation would do nothing at all, as there's nothing to flush.
You want to flush the stream before closing it. Though the close operation should do that for you, it is often considered best practice to do it explicitly (and I have encountered situations where that did make a difference, where apparently the close operation did not actually do a flush first).
Maybe you are confused with that?
When you write data out to a stream, some amount of buffering will occur, and you never know for sure exactly when the last of the data will actually be sent. You might perform many write operations on a stream before closing it, and invoking the flush() method guarantees that the last of the data you thought you had already written actually gets out to the file. Whenever you're done using a file, either reading it or writing to it, you should invoke the close() method. When you are doing file I/O you're using expensive and limited operating system resources, and so when you're done, invoking close() will free up those resources.
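A small sketch of that pattern with a file, using try-with-resources so close() happens even if something fails; the file name and payload are placeholders:

byte[] payload = "some data".getBytes(StandardCharsets.UTF_8);
try (BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream("data.bin"))) {
    out.write(payload);   // may sit in the in-memory buffer for a while
    out.flush();          // forces the buffered bytes down to the OS now
}                         // close() flushes any remainder and frees the file handle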
This is needed when using ObjectInputStream and ObjectOutputStream, because they send a header over the stream before the first write is called. The call to flush() sends that header to the remote side.
According to the spec, the header consists of the following contents:
magic version
If the header hasn't arrived at the moment an ObjectInputStream is constructed, that constructor will hang until it has received the header bytes.
This means that if the protocol in question is written with object streams, it should flush right after creating an ObjectOutputStream.
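A sketch of that ordering over a socket (error handling omitted); if both ends construct their ObjectInputStream before flushing their ObjectOutputStream, each blocks waiting for the other's header:

// Each side: create the ObjectOutputStream first and flush, so the stream header goes out immediately.
ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
out.flush();                                                    // pushes the serialization header to the peer

// Only then construct the ObjectInputStream; this constructor blocks until the peer's header arrives.
ObjectInputStream in = new ObjectInputStream(socket.getInputStream());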
I was always wondering: What is the end of a stream?
In the Javadoc of most readLine methods in the java.io package, you can read that "this returns null if the end of the stream is reached". Yet I never actually got a null, as most streams (in the case of the network streams I use most often) just block program execution until something is written into the stream on the remote end.
Are there ways to force this actual behavior to happen, in a non-exception-throwing way? I am simply curious...
Think of a file being read. There is an end of stream there, the end of the file. If you try to read beyond that, you simply can't. If you have a network connection though, there doesn't need to be an end of stream if you simply wait for more data to be sent.
In the case of the file, we know for a fact that there is no more data to be read. In the case of a network stream we (usually) don't.
Blocking a FileReader when no more data is available and waking up when there is: the simple answer is, you can't. The fundamental difference is that you read a file actively, but when you listen to a network stream you read passively. When something comes in from the network, your hardware sends a sort of signal to the operating system, which then gives the new data to your JVM, and the JVM then awakens your process to read the new data (so to speak). But we don't have that with a file, at least not immediately.
A possible workaround would be to make a wrapper around the stream reader you have, with a listener that is notified when the file changes and then wakes you up to read further. In Java 7 you can use the WatchService for this.
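A rough sketch of that workaround with the Java 7 WatchService; the directory and file name are placeholders and interrupt handling is omitted:

Path dir = Paths.get("/var/log");                                   // directory containing the growing file
WatchService watcher = FileSystems.getDefault().newWatchService();
dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

while (true) {
    WatchKey key = watcher.take();                                  // blocks until something in the directory changes
    for (WatchEvent<?> event : key.pollEvents()) {
        if (event.context().toString().equals("app.log")) {         // our file was modified
            // reopen or continue reading the file from the last known position here
        }
    }
    key.reset();                                                    // re-arm the key so further events are delivered
}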
At some point, the socket will be closed, and no more data can be sent via that stream. This is when the InputStream will signal EOF by returning -1 from read() and its overloads. This state is irreversible. That stream is dead.
Simply blocking for more data on an open stream is not an EOF condition.
I never actually got a null, as most streams (in the case of a network stream that I use most often) just block the program execution until something is written into the stream on the remote end
No. You never got a null because the peer never closed the connection. That's what 'end of stream' means. It doesn't mean 'no more data for the time being'.
I have several questions:
1. I have two computers connected by a socket connection. When the program executes
outputStream.writeInt(value);
outputStream.flush();
what actually happens? Does the program wait until the other computer reads the integer value?
2. How can I empty the outputStream or inputStream? Meaning, when emptying the outputStream or inputStream, whatever has been written to that stream gets removed. (Please don't suggest doing it by closing the connection!)
I tried to empty the inputStream this way:
byte[] eatup = new byte[20 * 1024];
int available = 0;
while (true)
{
    available = serverInputStream.available();
    if (available == 0)
        break;
    serverInputStream.read(eatup, 0, available);
}
eatup = null;
String fileName = (String) serverInputStream.readObject();
The program should not get past that line, as nothing else is being written to the outputStream. But my program executes it anyway and throws a java.io.OptionalDataException.
Note: I am working on a client-server file transfer project. The client sends files to the server. The second code snippet is from the server side. If the 'cancel' button is pressed on the server end, the server stops reading bytes from the serverInputStream and sends a signal (I used int -1) to the client. When the client receives this signal it stops sending data to the server, but I've noticed that serverInputStream is not empty. So I need to empty this serverInputStream so that the client computer is able to send files to the server computer again (that's why I can't just sit blocked in the read method).
1. No. On flush() the data is written to the OS kernel, which will likely hand it immediately to the network card driver, which in turn sends it to the receiving end. In a nutshell, the send is fire-and-forget.
2. As Jeffrey commented, available() is not reliable for this sort of operation. If you are doing blocking IO then, as he suggests, you should just read() speculatively. However, it should be said that you really need to define a protocol on top of the raw streams, even if it's just using DataInput/DataOutputStream. When using raw write/read, the golden rule is that one write != one read. For example, if you were to write 10 bytes on one side and had a reading loop on the other, there is no guarantee that one read will return all 10 bytes; they may arrive as any combination of chunks. Similarly, two writes of 10 bytes might appear as one read of 20 bytes on the receiving side. Put another way, there is no concept of a "packet" unless you create a higher-level protocol on top of the raw bytes to provide packets. An example would be prefixing each send with a byte length so the receiving side knows how much data to expect in the current packet.
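A minimal sketch of such a length-prefixed framing using DataOutputStream/DataInputStream; the socket and message variables are placeholders:

// Sender: one "packet" is a 4-byte length followed by exactly that many bytes.
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
byte[] packet = message.getBytes(StandardCharsets.UTF_8);
out.writeInt(packet.length);
out.write(packet);
out.flush();

// Receiver: readFully() keeps reading until the whole packet has arrived,
// no matter how the bytes were chunked in transit.
DataInputStream in = new DataInputStream(socket.getInputStream());
int length = in.readInt();
byte[] received = new byte[length];
in.readFully(received);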
If you need to do anything more complicated than a basic app, I strongly encourage you to investigate some higher-level libraries that have solved many of the gnarly issues of network IO. I would recommend Netty, which I use for production apps. However, it is quite a big leap in understanding from simple IO streams to Netty's more event-based system. There may be other libraries somewhere in the middle.