I was wondering what the best/appropriate way is to release file resources/handles.
Traditional code:
BufferedInputStream stream = null;
try {
    // ...
    stream = new BufferedInputStream(new FileInputStream(fileName)); // fileName stands in for the file being read
    // ...
} finally {
    if (stream != null) {
        stream.close();
    }
}
Will the file handle be released by calling BufferedInputStream.close() alone, or does the underlying stream (i.e. FileInputStream.close()) also need to be closed explicitly?
P.S. The Javadoc for the [FilterOutputStream.close] method specifies that it explicitly closes the underlying stream too, but other streams don't seem to mention this in their docs.
[FilterOutputStream.close]: http://docs.oracle.com/javase/1.4.2/docs/api/java/io/FilterOutputStream.html#close%28%29
Please advise. Thanks in advance.
You can always check the source code for the underlying class to determine the exact behavior.
However, in this case calling close() on the BufferedInputStream will also close the underlying stream, i.e. the FileInputStream.
The source code is available here
Your approach is correct. When in doubt, always check the source code: http://www.docjar.com/html/api/java/io/BufferedInputStream.java.html. The close() method closes "in", the stream that was chained to the BufferedInputStream.
When multiple streams are chained, closing the stream that was constructed last will close the underlying streams as well. So, closing the BufferedInputStream will also close the underlying FileInputStream.
So you just call close() on the outermost stream and it automatically closes the underlying ones.
BufferedInputStream doesn't itself hold any system resources; BufferedInputStream.close() simply propagates the close call to the InputStream it wraps, so it should do just fine.
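For completeness, here is a minimal sketch (assuming Java 7+; the file name is illustrative) showing that closing only the outermost stream releases the file handle:
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChainedClose {
    public static void main(String[] args) throws IOException {
        // "data.txt" is just an example file name
        try (InputStream in = new BufferedInputStream(new FileInputStream("data.txt"))) {
            int b;
            while ((b = in.read()) != -1) {
                // process each byte here
            }
        } // close() on the BufferedInputStream runs here and closes the FileInputStream as well
    }
}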
Related
Is there any reason for calling close methods on the StreamWriter class? Why do I have to do it? If I don't close the StreamWriter will I get some kind of undefined behavior?
Assuming you're talking about java.io.OutputStreamWriter, yes, you should close it, in a finally block, when you don't want to write anything more. This allows the underlying OutputStream to be closed. If the underlying OutputStream is a FileOutputStream, it will release the file descriptor (which is a limited OS resource) and allow other apps to read the file. If it's a SocketOutputStream, it will signal to the other side that it shouldn't expect anything more from the socket input stream.
In general, streams and readers/writers must always be closed properly. If using Java 7, use the new try-with-resources construct to make sure it's done automatically for you.
The operating system manages files, and if the file is not closed in Java, system-wide resources are lost.
In java 7 you can however use
try (OutputStreamWriter outWriter = new OutputStreamWriter(outStream, "UTF-8")) {
...
}
without an explicit close. (Output streams and writers implement Closeable.)
BTW, @PriestVallon was just trying to get you to formulate your question a bit better/more attractive for answering. A "light" response to that can be misunderstood, as you've seen.
Writing and reading streams quite often involves the use of OS resources, such as sockets, file handles and so on. If you're writing to a stream you should also close it, in order to release resources you may have obtained (it depends on the actual resources you are using beneath the stream). Sometimes closing a stream writer involves the release of an exclusive allocation of a resource, or the flushing of temporary data to the stream.
Sometimes the close is ineffective; it depends on the kind of stream you have, but the interface must take care of all the cases where a stream has to be closed.
As per the Java docs, invoking close() on any java.io stream automatically invokes flush(). But I have seen in a lot of examples, even in production code, that developers explicitly use flush() just before close(). In what conditions do we need to use flush() just before close()?
Developers get into the habit of calling flush() after writing something which must be sent.
IMHO Using flush() then close() is common when there has just been a write e.g.
// write a message
out.write(buffer, 0, size);
out.flush();
// finished
out.close();
As you can see the flush() is redundant, but means you are following a pattern.
I guess in many cases it's because they don't know close() also invokes flush(), so they want to be safe.
Anyway, using a buffered stream should make manual flushing almost redundant.
I want to point out an important concept that many previous comments have alluded to:
A stream's close() method does NOT necessarily invoke flush().
For example, org.apache.axis.utils.ByteArray#close() does not invoke flush() (see its source code).
The same is true more generally for any implementation of Flushable and Closeable. A prominent example is java.io.PrintWriter: its close() method does NOT call flush() (see its source code).
This might explain why developers are cautiously calling flush() before closing their streams. I personally have encountered production bugs in which close() was called on a PrintWriter instance without first calling flush().
The answers already provided give interesting insights that I will try to compile here.
Closeable and Flushable are two independent traits, and Closeable does not specify that close() should call flush(). This means that it is up to the implementation's documentation (or code) to specify whether flush() is called or not. In most cases it is the norm, but there is no guarantee.
Now regarding what @Fabian wrote: it is true that java.io.PrintWriter's close() method does not call flush(). However, it calls out.close() (out being the underlying writer). Assuming out is a BufferedWriter, we are fine, since BufferedWriter.close() flushes (according to its doc). Had it been another writer, that might not have been the case...
So you have two choices:
either you ensure that at least one inner Writer/Stream flushes by itself (beware in case of code refactoring),
or you just call flush() and you're on the safe side all the time.
Solution 2, requiring less work, is my preferred one.
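As a quick illustration of solution 2 (the file name and message are made up), flushing explicitly means you don't depend on the wrapped writer's close() behaviour:
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class FlushThenClose {
    public static void main(String[] args) throws IOException {
        PrintWriter out = new PrintWriter(new FileWriter("report.txt")); // "report.txt" is illustrative
        try {
            out.println("done");
            out.flush(); // solution 2: push buffered data down the chain ourselves
        } finally {
            out.close(); // PrintWriter.close() closes the underlying writer but never calls flush() itself
        }
    }
}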
I could not find clarification of this in the documentation.
But when we have a Process object and call getInputStream(),
Do we get a new stream that we should explicitly close when we are done with it?
or
do we get the stream that is already there, associated with the Process, which we should not close because the Process will take care of closing it?
Basically, how should we interact with the stream we get from Process.getInputStream()? To close or not to close?
From reading UNIXProcess.java, this is what happens:
We need to distinguish between two states: either process is still alive, or it is dead.
If the process is alive, by closing the OutputStream (which goes to stdin of the process), you are telling the process that there is no more input for it. By closing the InputStreams (stdout, stderr of the process), the process can no longer write to these (it will get SIGPIPE if it tries).
When the process dies, Java will buffer the remaining data from stdout/stderr and close all three streams for you (it runs a "process reaper" thread which is notified on process death). Any attempt to write to the OutputStream will fail. Reading from the InputStream will return buffered data, if any. Closing any of them has no benefit, but also causes no harm. (The underlying file descriptors are closed by this time.)
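A rough sketch of the usual pattern (the command is illustrative): read the process output to completion, then close your end of the pipe; as noted above, closing it after the process has died is harmless:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ReadProcessOutput {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("ls", "-l").start(); // illustrative command
        try (BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line); // consume stdout of the process
            }
        } // closes our end of the pipe; harmless even if the process has already exited
        System.out.println("exit code: " + p.waitFor());
    }
}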
My first reaction was to close it; you always close streams that you open. I do realize the documentation is not up to par, but since it doesn't explicitly say "do not close", to me that means follow good programming practices.
InputStream is = process.getInputStream();
try {
// your code
} finally {
try { is.close(); } catch (Exception ignore) {}
}
If you need to make sure this isn't problematic, just write a quick test case where you read from the input stream a few dozen times, each time opening and closing the InputStream.
When you call Process.getInputStream() you get an existing input stream that was set up for the process. When the process dies, that input stream does not go away automatically - think of it as a buffer that you can still read from. The process's end of the pipe might be closed, but your end is not. It is your responsibility to close it, though GC will eventually get it.
You should also close the other two: getErrorStream() and getOutputStream().
You do not close streams that you did not open; that's a nasty side effect.
If you created the process, kill it first and close streams after that.
I always close them! I am not 100% sure, but as far as I know, if you leave the InputStream open, the file will stay open until you close it!! So follow the "standard rules" and close it! Here is an example:
Process Builder waitFor() issue and Open file limitations
Assume that I have the following code fragment:
operation1();
bw.close();
operation2();
When I call BufferedReader.close() from my code, I am assuming my JVM makes a system call that ensures that the buffer has been flushed and written to disk. I want to know if close() waits for the system call to complete its operation or does it proceed to operation2() without waiting for close() to finish.
To rephrase my question, when I do operation2(), can I assume that bw.close() has completed successfully?
when I do operation2(), can I assume that bw.close() has completed successfully?
Yes
Close the stream, flushing it first. Once a stream has been closed, further write() or flush() invocations will cause an IOException to be thrown. Closing a previously-closed stream, however, has no effect.
Though the documentation does not say anything specifically, I would assume this call does block until finished. In fact, I'm pretty sure nothing in the java.io package is non-blocking.
The JavaDoc for java.io.BufferedReader.close() is taken exactly from the contract it fulfills with java.io.Reader.
The Doc says:
Closes the stream and releases any system resources associated with it. Once the stream has been closed, further read(), ready(), mark(), reset(), or skip() invocations will throw an IOException. Closing a previously closed stream has no effect.
While this makes no explicit claim of blocking until the file system operation is complete, once close() returns, all other operations on this same instance of BufferedReader will throw an exception. Although the JavaDoc could be seen as ambiguous about when the operation completes, if the file system flush and close were not complete when this method returned, it would violate the spirit of the contract and be a bug in Java (implementation or documentation).
NO! You cannot be sure for the following reason:
A BufferedWriter is a Wrapper for another Writer. A close() to the BufferedWriter just propagates to the underlying Writer.
IF this underlying Writer is an OutputStreamWriter, and IF the OutputStream is a FileOutputStream, THEN the close will issue a system call to close the file handle.
You are completely free to even have a Writer where close() is a noop, or where the close is implemented non-blocking, but when using only classes from java.io, this is never the case.
A Writer (or BufferedWriter) is a black box that writes a stream of characters somewhere, not necessarily to the disk. A call to close() must (by method contract) flush its buffered content before closing, and should (normally) block until all its "essential" work is done. But this depends on the implementation and the environment (you cannot know about caches that sit below the Java layer, for example). As for the work done by the Java writer itself (e.g. making the system call to write to disk in the case of a FileWriter or similar, and closing the file handle), yes, you can assume that when close() returns it has already done all its work.
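To make the chain described above concrete, here is a sketch (the file name and charset are illustrative) of the typical wrapping; the single close() call propagates from the BufferedWriter down to the FileOutputStream, which releases the file handle:
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class WriterChain {
    public static void main(String[] args) throws IOException {
        Writer bw = new BufferedWriter(
                new OutputStreamWriter(
                        new FileOutputStream("out.txt"), StandardCharsets.UTF_8));
        try {
            bw.write("hello");
        } finally {
            bw.close(); // flushes the buffer, then closes the OutputStreamWriter and the FileOutputStream
        }
    }
}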
In general with any i/o operation you can make no assumptions about what has happened after the write() operation completes, even after you close. The idea of delivery is a subjective concept relative to the medium.
For instance, what if the writer represents a TCP connection, and the data is lost between the client and the server? Or what if the kernel writes the data to disk, but the drive physically fails to write it? Or if the writer represents a carrier pigeon that gets shot en route?
Furthermore, imagine the case when the write has no way of confirming that the endpoint has received the data (read: udp/datagrams). What should the blocking policy be in that situation?
The buffer will have been flushed to the operating system and the file handle closed, so the Java operations required will have been completed.
BUT the operating system will have cached or queued the write to the actual disk, pipe, network, whatever - there is no guarantee that the physical write has completed. FileChannel.force() provides a way to do that for files on local disks: see the Javadoc.
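A minimal sketch (file name and data are illustrative) of forcing the physical write mentioned above; FileDescriptor.sync() is an alternative to FileChannel.force():
import java.io.FileOutputStream;
import java.io.IOException;

public class ForceToDisk {
    public static void main(String[] args) throws IOException {
        try (FileOutputStream fos = new FileOutputStream("out.bin")) {
            fos.write(new byte[] {1, 2, 3});  // the data reaches the OS here
            fos.getChannel().force(true);     // ask the OS to flush it to the physical device
        } // close() releases the file handle
    }
}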
Yes, IF you reach operation2();, the stream would've had to have been completely closed. However, close() throws IOException, so you may not even get to operation2();. This may or may not be the behavior that you expect.
I have a question in my mind: while writing into a file, should we call flush() before closing? If so, what will it do exactly? Don't streams auto-flush?
EDIT:
So what does flush() actually do?
Writers and streams usually buffer some of your output data in memory and try to write it in bigger blocks at a time. flushing will cause an immediate write to disk from the buffer, so if the program crashes that data won't be lost. Of course there's no guarantee, as the disk may not physically write the data immediately, so it could still be lost. But then it wouldn't be the Java program's fault :)
PrintWriters auto-flush when you write an end-of-line via println() (if they were constructed with autoFlush set to true), and of course streams and buffers flush when you close them. Other than that, there's flushing only when the buffer is full.
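For example, a quick sketch of that auto-flush behaviour (the output text is made up); with autoFlush enabled, println() flushes, while a plain print() does not:
import java.io.PrintWriter;

public class AutoFlushDemo {
    public static void main(String[] args) {
        // autoFlush = true: println(), printf() and format() flush the buffer
        PrintWriter out = new PrintWriter(System.out, true);
        out.println("flushed immediately"); // triggers the auto-flush
        out.print("still buffered");        // print() does not trigger it
        out.close();                        // close() flushes whatever is left
    }
}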
I would highly recommend calling flush before close. Basically it writes the remaining buffered data into the file.
If you call flush explicitly you may be sure that any IOException coming out of close is really catastrophic and related to releasing system resources.
When you flush yourself, you can handle its IOException in the same way as you handle your data write exceptions.
You don't need to do a flush because close() will do it for you.
From the javadoc:
"Close the stream, flushing it first. Once a stream has been closed, further write() or flush() invocations will cause an IOException to be thrown. Closing a previously-closed stream, however, has no effect."
To answer your question as to what flush actually does, it makes sure that anything you have written to the stream - a file in your case - does actually get written to the file there and then.
Java can perform buffering, which means that it will hold onto data written in memory until it has a certain amount, and then write it all to the file in one go, which is more efficient. The downside of this is that the file is not necessarily up-to-date at any given time. Flush is a way of saying "make the file up-to-date".
Close calls flush first to ensure that after closing the file has what you would expect to see in it, hence as others have pointed out, no need to flush before closing.
Close automatically flushes. You don't need to call it.
There's no point in calling flush() just before a close(), as others have said. The time to use flush() is if you are keeping the file open but want to ensure that previous writes have been fully completed.
As said, you don't usually need to flush.
It only makes sense if, for some reason, you want another process to see the complete contents of a file you're working with, without closing it. For example, it could be used for a file that is concurrently modified by multiple processes, although with a LOT of care :-)
FileWriter is an evil class, as it picks up whatever character set happens to be the platform default rather than taking an explicit charset. Even if you do want the default, be explicit about it.
The usual solution is OutputStreamWriter and FileOutputStream. It is possible for the decorator to throw an exception. Therefore you need to be able to close the stream even if the writer was never constructed. If you are going to do that, you only need to flush the writer (in the happy case) and always close the stream. (Just to be confusing, some decorators, for instance for handling zips, have resources that do require closing.)
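A sketch of that pattern (names and charset are illustrative): construct the stream first so it can be closed even if constructing the writer fails, flush the writer on the happy path, and always close the stream:
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetWrite {
    public static void main(String[] args) throws IOException {
        FileOutputStream out = new FileOutputStream("notes.txt");
        try {
            Writer writer = new OutputStreamWriter(out, StandardCharsets.UTF_8); // explicit charset
            writer.write("some text");
            writer.flush(); // happy case: push the encoded characters down to the stream
        } finally {
            out.close();    // always release the file handle, even if the writer was never constructed
        }
    }
}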
Another use case for flushing in a program is writing the progress of a long-running job into a file (so it can be stopped and restarted later). You want to be sure that the data is safe on the drive.
while (true) {
    computeStuff();
    progress += 1;
    out.write(String.format("%d", progress));
    out.flush();
}
out.close();