As the title says: does closing a FileChannel close the underlying file stream?
From the AbstractInterruptibleChannel.close() API docs you can read:
Closes this channel.
If the channel has already been closed then this method returns
immediately. Otherwise it marks the channel as closed and then invokes
the implCloseChannel method in order to complete the close operation.
Which invokes AbstractInterruptibleChannel.implCloseChannel:
Closes this channel.
This method is invoked by the close method in order to perform the
actual work of closing the channel. This method is only invoked if the
channel has not yet been closed, and it is never invoked more than
once.
An implementation of this method must arrange for any other thread
that is blocked in an I/O operation upon this channel to return
immediately, either by throwing an exception or by returning normally.
And that doesn't say anything about the stream. So in fact, when I do:
public static void copyFile(File from, File to)
        throws IOException, FileNotFoundException {
    FileChannel sc = null;
    FileChannel dc = null;
    try {
        to.createNewFile();
        sc = new FileInputStream(from).getChannel();
        dc = new FileOutputStream(to).getChannel();
        long pos = 0;
        long total = sc.size();
        while (pos < total)
            pos += dc.transferFrom(sc, pos, total - pos);
    } finally {
        if (sc != null)
            sc.close();
        if (dc != null)
            dc.close();
    }
}
...do I leave the streams open?
The answer to the title question is 'yes': closing the channel also closes the underlying stream, but there's nothing in the Javadoc that actually says so. The reason is that FileChannel itself is an abstract class whose concrete implementation provides the implCloseChannel() method, which closes the underlying file descriptor. However, due to that architecture and the fact that implCloseChannel() is protected, this doesn't get documented.
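A quick way to see this in practice (a minimal sketch; the file path is a placeholder) is to close only the channel and then try to use the stream:

import java.io.FileInputStream;
import java.io.IOException;

public class ChannelCloseDemo {
    public static void main(String[] args) throws IOException {
        // "data.bin" is a placeholder path for illustration
        FileInputStream in = new FileInputStream("data.bin");
        in.getChannel().close();          // close only the channel
        try {
            in.read();                    // the stream's file descriptor is gone too
        } catch (IOException expected) {
            System.out.println("stream was closed with the channel: " + expected);
        }
    }
}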
Related
From the PrintStream documentation:
Optionally, a PrintStream can be created so as to flush automatically;
this means that the flush method is automatically invoked after a byte
array is written, one of the println methods is invoked, or a newline
character or byte ('\n') is written.
Then, given this code:
System.out.print("hi"); // gives console output: hi
System.out.print(7); // gives console output: 7
// prevents flushing when stream wiil be closed at app shutdown
for (;;) {
}
Why, then, do I see output on my console? Nothing should be written to the console (the PrintStream instance behind System.out), because nothing should have been flushed so far!
This related question didn't answer it.
I guess the answer is in the source code (the package-private utility method BufferedWriter.flushBuffer()), but I don't understand the comment on that code, "Flushes the output buffer to the underlying character stream, without flushing the stream itself": if the PrintStream (which is tied to console output), i.e. the "stream itself", is not flushed, output to the console should not be refreshed!
Source for the private write(String) helper that PrintStream.print(String) delegates to:
private void write(String s) {
    try {
        synchronized (this) {
            ensureOpen();
            textOut.write(s);
            textOut.flushBuffer();
            charOut.flushBuffer();
            if (autoFlush && (s.indexOf('\n') >= 0))
                out.flush();
        }
    }
    catch (InterruptedIOException x) {
        Thread.currentThread().interrupt();
    }
    catch (IOException x) {
        trouble = true;
    }
}
Source for BufferedWriter.flushBuffer():
/**
 * Flushes the output buffer to the underlying character stream, without
 * flushing the stream itself. This method is non-private only so that it
 * may be invoked by PrintStream.
 */
void flushBuffer() throws IOException {
    synchronized (lock) {
        ensureOpen();
        if (nextChar == 0)
            return;
        out.write(cb, 0, nextChar);
        nextChar = 0;
    }
}
More details are also given here. It is very complicated, but it seems that at some stage a BufferedWriter is given to the PrintStream constructor.
I went step by step with a debugger and this is what I found:
The string s is displayed in the console after line 527 executes, i.e. before line 528, where the check for \n is done.
Deep inside charOut.flushBuffer(), another method gets called in which the check for \n is missing (see the flow below).
The flow is as follows:
System.out#print(String s) calls PrintStream#print(String s).
PrintStream#print(String s) calls PrintStream#write(String s).
PrintStream#write(String s) calls OutputStreamWriter#flushBuffer().
OutputStreamWriter#flushBuffer() calls StreamEncoder#flushBuffer().
StreamEncoder#flushBuffer() calls StreamEncoder#implFlushBuffer().
StreamEncoder#implFlushBuffer() calls StreamEncoder#writeBytes().
StreamEncoder#writeBytes() calls PrintStream#write(byte buf[], int off, int len), which flushes the buffer if (autoFlush).
The most important snippets are above. The BufferedWriter does not seem to be involved in this flow.
https://bugs.openjdk.java.net/browse/JDK-8025883 describes this bug.
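One way to make the difference visible (a minimal sketch; the extra BufferedOutputStream layer and buffer size are my own additions for illustration): System.out is constructed with autoFlush enabled, so the write(byte[], int, int) call at the end of the flow above flushes on every print, while the same construction with autoFlush disabled holds the bytes back until an explicit flush:

import java.io.BufferedOutputStream;
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.PrintStream;

public class AutoFlushDemo {
    public static void main(String[] args) throws InterruptedException {
        // Same shape as System.out, but with a larger buffer and autoFlush = false
        PrintStream buffered = new PrintStream(
                new BufferedOutputStream(new FileOutputStream(FileDescriptor.out), 8192),
                false);
        buffered.print("hi");   // sits in the BufferedOutputStream: no flush happens
        System.out.print("hi"); // appears at once: autoFlush makes PrintStream.write(byte[]) flush
        Thread.sleep(5000);     // during this pause only one "hi" is visible
        buffered.flush();       // now the second "hi" shows up
    }
}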
This bit me in a program that reads and parses a binary file while doing a lot of System.out.printf() calls, which took way longer than it should have.
What I ended up doing was writing a helper class that violates the contract of Streams by not honoring every flush request:
import java.io.IOException;
import java.io.OutputStream;

class ForceBufferedOutputStream extends OutputStream {
    OutputStream out;
    byte[] buffer;
    int buflen;
    boolean haveNewline;
    private static final int bufsize = 16384;

    public ForceBufferedOutputStream(OutputStream out) {
        this.out = out;
        this.buffer = new byte[bufsize];
        this.buflen = 0;
        this.haveNewline = false;
    }

    @Override
    public void flush() throws IOException {
        // Only honour the flush if a newline was seen or the buffer is full.
        if (this.haveNewline || this.buflen == bufsize) {
            out.write(buffer, 0, buflen);
            out.flush();
            this.buflen = 0;
            this.haveNewline = false;
        }
    }

    @Override
    public void close() throws IOException {
        // Drain whatever is still buffered before closing the underlying stream.
        if (buflen > 0) {
            out.write(buffer, 0, buflen);
            buflen = 0;
        }
        out.close();
    }

    @Override
    public void write(int b) throws IOException {
        buffer[buflen++] = (byte) b;
        if (b == '\n')
            this.haveNewline = true;
        if (buflen == bufsize)
            this.flush();
    }
}
I then use new PrintStream(new ForceBufferedOutputStream(System.out)) instead of System.out.
I consider this a horrible piece of software: as said, it violates the contract that flush() must make sure everything is written, and it could be improved to optimize the array write calls. But in my case the runtime was cut from 17 minutes to 3:45, so if you need a copy/paste that speeds up a quick-and-dirty type of program, I hope it helps somewhat.
I was looking into a problem with .close() causing cut-off issues. The program runs on two different servers but had the same cut-off issue. It appears that the log file is not being flushed properly, so I decided to dig into the .close() source code. I don't see a .flush() being called. Am I missing something? Should we always call .flush()? According to this answer, it shouldn't matter: Using flush() before close()
What I'm calling:
private static void write_to_file(String incoming) {
    output_stream.write(incoming);
    output_stream.write(System.lineSeparator());
}
Later on I call output_stream.close();
The source code:
/**
 * Closes the stream and releases any system resources associated
 * with it. Closing a previously closed stream has no effect.
 *
 * @see #checkError()
 */
public void close() {
    try {
        synchronized (lock) {
            if (out == null)
                return;
            out.close();
            out = null;
        }
    }
    catch (IOException x) {
        trouble = true;
    }
}
Log file:
C:\apps\bot\log\processed\file.0000090.gz
C:\apps\bot\log\processed\file.0000091.gz
C:\apps\bot\log\process
As the answer you've linked correctly points out, calling close() on a stream is enough to flush whatever you've written to it. If the output is truncated, there are a few common pitfalls:
Your close() method is not called, e.g. if you put it in a catch block instead of finally (see the sketch after this list) ;)
Calling close() on a custom stream doesn't propagate the call to the underlying stream.
The problem can also be in encoding if you don't properly convert your String to bytes.
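For the first pitfall, a minimal sketch (assuming Java 7+ and a caller-supplied file path) that guarantees close(), and therefore the final flush, on every exit path:

import java.io.IOException;
import java.io.PrintWriter;

public class SafeLogWrite {
    static void writeLines(String path, Iterable<String> lines) throws IOException {
        // try-with-resources closes (and thus flushes) the writer on every exit path
        try (PrintWriter out = new PrintWriter(path, "UTF-8")) {
            for (String line : lines) {
                out.write(line);
                out.write(System.lineSeparator());
            }
        }
    }
}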
I have a class that implements 'Runnable' to read data from a data stream. The data comes from a Channel which is stored as a member variable in another of my classes, and I can get an instance of this channel by simply calling the getter getInputChannel(). Now, for my Runnable to read the data from the channel, it needs to know what type of channel it is so that it can use the channel's read method. The channel type may be one of either FileChannel or SocketChannel, and is decided at run time, i.e.,
private class ReadInputStream implements Runnable {
    Thread thread;
    boolean running = true;
    ByteBuffer buffer = ByteBuffer.allocate(1024);
    FileChannel or SocketChannel channel;

    public ReadInputStream() {
        // Need to cast type channel at run time
        Channel ch = getInputChannel();
        this.channel = (FileChannel or SocketChannel) ch;
    }

    public void run() {
        while (running) {
            channel.read(buffer);
            // etc.
        }
    }
}
What is the best way to get the right type of channel so that I can implement its read method in the runnable's run() method?
Both FileChannel and SocketChannel implement ByteChannel, which (through its ReadableByteChannel superinterface) declares the read(ByteBuffer) method, so that's the type your getInputChannel() should return.
Edit: Or, if you only ever read from the channel, return a ReadableByteChannel, as Darkhogg says. Since this is an input channel, that is most likely the case anyway.
There is no way in Java to express union types. Your best bet is to use some common interface that applies to both.
If you're using the channel only for reading, define it to be a ReadableByteChannel (see the sketch after this list).
If you're using it for writing, use a WritableByteChannel.
If you need both, use ByteChannel.
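A minimal sketch of the read-only option (the channel is passed in, e.g. the result of your getInputChannel()); since both concrete channel types are ReadableByteChannels, no cast or instanceof check is needed:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

class ReadInputStream implements Runnable {
    private final ReadableByteChannel channel;   // works for FileChannel and SocketChannel alike
    private final ByteBuffer buffer = ByteBuffer.allocate(1024);
    private volatile boolean running = true;

    ReadInputStream(ReadableByteChannel channel) { // e.g. pass getInputChannel() here
        this.channel = channel;
    }

    public void run() {
        try {
            while (running && channel.read(buffer) != -1) {
                buffer.flip();
                // process the buffer contents here
                buffer.clear();
            }
        } catch (IOException e) {
            // handle or log the read failure
        }
    }

    void stop() {
        running = false;
    }
}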
You could use a simple if/else clause and cycle through the available instance types, for instance (no pun intended..):
if (channel instanceof FileChannel)
{
    ((FileChannel) channel).read(buffer);
}
else if (channel instanceof SocketChannel)
{
    ((SocketChannel) channel).read(buffer);
}
etc.
I have a Socket that I am both reading and writing to, via BufferedReaders and BufferedWriters. I'm not sure which operations are okay to do from separate threads. I would guess that writing to the socket from two different threads at the same time is a bad idea. Same with reading off the socket from two different threads at the same time. What about reading on one thread while writing on another?
I ask because I want to have one thread blocked for a long time on a read as it waits for more data, but during this wait I also have occasional data to send on the socket. I'm not clear if this is threadsafe, or if I should cancel the read before I write (which would be annoying).
Sockets are thread-unsafe at the stream level; you have to provide synchronization. The only guarantee is that you won't get copies of the exact same bytes in different read invocations, regardless of concurrency.
But at the Reader and, especially, the Writer level, you might have some locking problems.
Anyway, you can handle read and write operations on the Socket's streams as if they were completely independent objects (they are; the only thing they share is their lifecycle).
Once you have provided correct synchronization among reader threads on one hand, and writer threads on the other hand, any number of readers and writers will be okay. This means that, yes, you can read on one thread and write on another (in fact that's very frequent), and you don't have to stop reading while writing.
One last piece of advice: all of the operations involving threads have associated timeouts; make sure you handle them correctly.
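As a minimal sketch of the scenario in the question (one long-blocked reader thread, occasional writes from the main thread; the host and port are placeholders), reads and writes never touch the same stream, so no extra locking is needed between them:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ReadWriteSocket {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("example.com", 12345); // placeholder endpoint
        final BufferedReader in =
                new BufferedReader(new InputStreamReader(socket.getInputStream()));
        final PrintWriter out = new PrintWriter(socket.getOutputStream(), true);

        // Reader thread: may stay blocked in readLine() for a long time.
        Thread reader = new Thread(new Runnable() {
            public void run() {
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("received: " + line);
                    }
                } catch (IOException e) {
                    // socket closed or connection reset
                }
            }
        });
        reader.start();

        // Meanwhile the main thread writes whenever it has something to send.
        out.println("hello");
        out.println("occasional data");
    }
}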
You actually read from an InputStream and write to an OutputStream. They are fairly independent, and as long as you serialize access to each of them you are OK.
You have to correlate, however, the data that you send with data that you receive. That's different from thread safety.
Java's java.net.Socket is not actually thread safe: open the Socket source and look at, say, the connected member field and how it is used. You will see that it is not volatile, and is read and updated without synchronization. This indicates that the Socket class is not designed to be used by multiple threads. Although there are some locks and synchronization in there, it is not consistent.
I recommend not doing it. Instead, use buffers (NIO) and do the socket reads/writes in one thread.
For details, see the discussion.
You can have one thread reading the socket and another thread writing to it. You may want to have a number of threads write to the socket, in which case you have to serialize your access with synchronization or you could have a single writing thread which gets the data to write from a queue. (I prefer the former)
You can use non-blocking IO and share the reading and writing work in a single thread. However this is actually more complex and tricky to get right. If you want to do this I suggest you use a library to help you such as Netty or Mina.
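A minimal sketch of the single-writer-thread variant mentioned above (the class and method names are mine): other threads only enqueue messages and never touch the socket's OutputStream directly:

import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class SocketWriter implements Runnable {
    private final OutputStream out;
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    SocketWriter(OutputStream out) {
        this.out = out;
    }

    // Any thread may call this; only the writer thread ever touches the stream.
    void send(String message) {
        queue.add(message);
    }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String message = queue.take(); // blocks until there is something to write
                out.write(message.getBytes("UTF-8"));
                out.flush();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // asked to shut down
        } catch (IOException e) {
            // the socket is broken; stop writing
        }
    }
}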
Very interesting: the NIO SocketChannel writes are synchronized.
http://www.docjar.com/html/api/sun/nio/ch/SocketChannelImpl.java.html
The old java.io Socket behaviour depends on the OS, so you would have to look at the OS native code to know for sure (and that may vary from OS to OS)...
Just look at java.net.SocketOutputStream.java which is what Socket.getOutputStream returns.
(unless of course I missed something).
Oh, one more thing: they could have put synchronization in the native code in every JVM on each OS, but who knows for sure. Only with NIO is it obvious that synchronization exists.
This is how socketWrite looks in native code, so it's not thread safe judging from the code:
JNIEXPORT void JNICALL
Java_java_net_SocketOutputStream_socketWrite0(JNIEnv *env, jobject this,
                                              jobject fdObj,
                                              jbyteArray data,
                                              jint off, jint len) {
    char *bufP;
    char BUF[MAX_BUFFER_LEN];
    int buflen;
    int fd;

    if (IS_NULL(fdObj)) {
        JNU_ThrowByName(env, "java/net/SocketException", "Socket closed");
        return;
    } else {
        fd = (*env)->GetIntField(env, fdObj, IO_fd_fdID);
        /* Bug 4086704 - If the Socket associated with this file descriptor
         * was closed (sysCloseFD), the the file descriptor is set to -1.
         */
        if (fd == -1) {
            JNU_ThrowByName(env, "java/net/SocketException", "Socket closed");
            return;
        }
    }

    if (len <= MAX_BUFFER_LEN) {
        bufP = BUF;
        buflen = MAX_BUFFER_LEN;
    } else {
        buflen = min(MAX_HEAP_BUFFER_LEN, len);
        bufP = (char *)malloc((size_t)buflen);
        /* if heap exhausted resort to stack buffer */
        if (bufP == NULL) {
            bufP = BUF;
            buflen = MAX_BUFFER_LEN;
        }
    }

    while(len > 0) {
        int loff = 0;
        int chunkLen = min(buflen, len);
        int llen = chunkLen;
        (*env)->GetByteArrayRegion(env, data, off, chunkLen, (jbyte *)bufP);

        while(llen > 0) {
            int n = NET_Send(fd, bufP + loff, llen, 0);
            if (n > 0) {
                llen -= n;
                loff += n;
                continue;
            }
            if (n == JVM_IO_INTR) {
                JNU_ThrowByName(env, "java/io/InterruptedIOException", 0);
            } else {
                if (errno == ECONNRESET) {
                    JNU_ThrowByName(env, "sun/net/ConnectionResetException",
                                    "Connection reset");
                } else {
                    NET_ThrowByNameWithLastError(env, "java/net/SocketException",
                                                 "Write failed");
                }
            }
            if (bufP != BUF) {
                free(bufP);
            }
            return;
        }
        len -= chunkLen;
        off += chunkLen;
    }

    if (bufP != BUF) {
        free(bufP);
    }
}
I have a BufferedReader (generated by new BufferedReader(new InputStreamReader(process.getInputStream()))). I'm quite new to the concept of a BufferedReader but as I see it, it has three states:
A line is waiting to be read; calling bufferedReader.readLine will return this string instantly.
The stream is open, but there is no line waiting to be read; calling bufferedReader.readLine will hang the thread until a line becomes available.
The stream is closed; calling bufferedReader.readLine will return null.
Now I want to determine the state of the BufferedReader so that I can tell whether I can safely read from it without hanging my application. The underlying process (see above) is notoriously unreliable and might have hung; in that case, I don't want my host application to hang too. Therefore I'm implementing a kind of timeout. I first tried to do this with threading, but it got horribly complicated.
Calling BufferedReader.ready() will not distinguish between cases (2) and (3) above. In other words, if ready() returns false, it might be that the stream just closed (in other words, my underlying process closed gracefully) or it might be that the underlying process hung.
So my question is: how do I determine which of these three states my BufferedReader is in without actually calling readLine? Unfortunately I can't just call readLine to check this, as it opens my app up to a hang.
I am using JDK version 1.5.
There is a state where some data may be in the buffer, but not necessarily enough to fill a line. In this case, ready() would return true, but calling readLine() would block.
You should easily be able to build your own ready() and readLine() methods. Your ready() would actually try to build up a line, and only when it has done so successfully would it return true. Then your readLine() could return the fully-formed line.
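A minimal sketch of that idea (the class and method names are mine, and it assumes a single-byte charset for simplicity): pull whatever bytes are currently available into an internal buffer, and only report ready() once a complete line has accumulated:

import java.io.IOException;
import java.io.InputStream;

class LineProbe {
    private final InputStream in;
    private final StringBuilder pending = new StringBuilder();

    LineProbe(InputStream in) {
        this.in = in;
    }

    // Drains currently available bytes without blocking, then reports
    // whether a complete line is buffered.
    boolean ready() throws IOException {
        while (in.available() > 0) {
            int b = in.read();
            if (b == -1) {
                break;
            }
            pending.append((char) b);
        }
        return pending.indexOf("\n") >= 0;
    }

    // Returns a buffered line, or null if ready() has not yet seen one.
    String readLine() {
        int newline = pending.indexOf("\n");
        if (newline < 0) {
            return null;
        }
        String line = pending.substring(0, newline);
        pending.delete(0, newline + 1);
        return line;
    }
}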
Finally I found a solution to this. Most of the answers here rely on threads, but as I specified earlier, I was looking for a solution which doesn't require threads. My starting point, however, was the Process itself. What I found is that a process seems to have exited once both its output stream (confusingly exposed as the "input" stream) and its error stream are empty and closed. This makes sense if you think about it.
So I just polled the output and error streams and also tried to determine if the process had exited or not. Below is a rough copy of my solution.
public String readLineWithTimeout(Process process, long timeout) throws IOException, TimeoutException {
    BufferedReader output = new BufferedReader(new InputStreamReader(process.getInputStream()));
    BufferedReader error = new BufferedReader(new InputStreamReader(process.getErrorStream()));
    long startTime = 0;

    // Loop exits either by returning a line (or null on process exit)
    // or by throwing a TimeoutException.
    while (true) {
        if (output.ready()) {
            return output.readLine();
        } else if (error.ready()) {
            error.readLine();
        } else {
            try {
                process.exitValue();
                return null;
            } catch (IllegalThreadStateException ex) {
                // Expected behaviour: the process is still running
            }
        }
        if (startTime == 0) {
            startTime = System.currentTimeMillis();
        } else if (System.currentTimeMillis() > startTime + timeout) {
            throw new TimeoutException();
        }
    }
}
This is a pretty fundamental issue with Java's blocking I/O API.
I suspect you're going to want to pick one of:
(1) Re-visit the idea of using threading. Done properly, this doesn't have to be complicated, and it would let your code escape a blocked I/O read fairly gracefully, for example:
final BufferedReader reader = ...

ExecutorService executor = Executors.newSingleThreadExecutor();
Callable<String> task = new Callable<String>() {
    public String call() throws IOException {
        return reader.readLine();
    }
};

Future<String> futureResult = executor.submit(task);
// Throws a TimeoutException if the read doesn't return in time
String line = futureResult.get(timeout, TimeUnit.SECONDS);
(2) Use java.nio instead of java.io. This is a more complicated API, but it has non-blocking semantics.
Have you confirmed by experiment your assertion that ready() will return false even if the underlying stream is at end of file? Because I would not expect that assertion to be correct (although I haven't done the experiment).
You could use InputStream.available() to see if there is new output from the process. This should work the way you want it if the process outputs only full lines, but it's not really reliable.
A more reliable approach to the problem would be to have a separate thread dedicated to reading from the process, pushing every line it reads to some queue or consumer.
In general, you have to implement this with multiple threads. There are special cases, like reading from a socket, where the underlying stream has a timeout facility built-in.
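For the socket case, a minimal sketch of that built-in facility (the host and port are placeholders): setSoTimeout() makes a blocked read give up with a SocketTimeoutException instead of hanging forever:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimedSocketRead {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("example.com", 12345); // placeholder endpoint
        socket.setSoTimeout(5000);                        // reads give up after 5 seconds
        BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        try {
            String line = reader.readLine();
            System.out.println("read: " + line);
        } catch (SocketTimeoutException e) {
            System.out.println("no line arrived within the timeout");
        }
    }
}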
However, it shouldn't be horribly complicated to do this with multiple threads. This is a pattern I use:
private static final ScheduledExecutorService worker =
    Executors.newSingleThreadScheduledExecutor();

private static class Timeout implements Callable<Void> {
    private final Closeable target;

    private Timeout(Closeable target) {
        this.target = target;
    }

    public Void call() throws Exception {
        target.close();
        return null;
    }
}

...

InputStream stream = process.getInputStream();
Future<?> task = worker.schedule(new Timeout(stream), 5, TimeUnit.SECONDS);

/* Use the stream as you wish. If it hangs for more than 5 seconds,
   the underlying stream is closed, raising an IOException here. */
...

/* If you get here without timing out, cancel the asynchronous timeout
   and close the stream explicitly. */
if (task.cancel(false))
    stream.close();
You could make your own wrapper around InputStream or InputStreamReader that works on a byte-by-byte level, for which ready() returns accurate values.
Your other options are threading, which could be done simply (look into some of the concurrent data structures Java offers), and NIO, which is very complex and probably overkill.
If you just want the timeout then the other methods here are possibly better. If you want a non-blocking buffered reader, here's how I would do it, with threads: (please note I haven't tested this and at the very least it needs some exception handling added)
public class MyReader implements Runnable {
    private final BufferedReader reader;
    private final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>();
    private volatile boolean closed = false;

    public MyReader(BufferedReader reader) {
        this.reader = reader;
    }

    public void run() {
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                queue.add(line);
            }
        } catch (IOException e) {
            // Treat a failed read the same as end of stream
        } finally {
            closed = true;
        }
    }

    // Returns true iff there is at least one line on the queue
    public boolean ready() {
        return (queue.peek() != null);
    }

    // Returns true if the underlying connection has closed
    // Note that there may still be data on the queue!
    public boolean isClosed() {
        return closed;
    }

    // Get next line
    // Returns null if there is none
    // Never blocks
    public String readLine() {
        return (queue.poll());
    }
}
Here's how to use it:
BufferedReader b; // Initialise however you normally do

MyReader reader = new MyReader(b);
new Thread(reader).start();

// True if there is data to be read regardless of connection state
reader.ready();

// True if the connection is closed
reader.isClosed();

// Gets the next line, never blocks
// Returns null if there is no data
// This doesn't necessarily mean the connection is closed, it might just be waiting!
String line = reader.readLine();
There are four possible states:
Connection is open, no data is available
Connection is open, data is available
Connection is closed, data is available
Connection is closed, no data is available
You can distinguish between them with the isClosed() and ready() methods.
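A small usage sketch of that distinction, built on the MyReader class above (the polling interval is arbitrary):

// Polls a MyReader instance and reacts to each of the four states.
static void drain(MyReader reader) throws InterruptedException {
    while (true) {
        String line = reader.readLine();
        if (line != null) {
            System.out.println(line);   // data was available (connection open or closed)
        } else if (reader.isClosed()) {
            break;                      // closed and fully drained: nothing more will arrive
        } else {
            Thread.sleep(50);           // open but idle: wait before polling again
        }
    }
}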