Does nesting a FileInputStream in a BufferedInputStream create a memory leak? - java

If I create my BufferedInputStream via…
FileInputStream fos = new FileInputStream(FILE_TO_READ);
BufferedInputStream bos = new BufferedInputStream(fos);
…
bos.close();
fos.close();
I can close the FileInputStream explicitly. But if I create it nested…
BufferedInputStream bos = new BufferedInputStream(new FileInputStream(FILE_TO_READ));
…
bos.close();
I cannot close the FileInputStream explicitly.
Is this a memory leak?

I don't believe so. According to the Java documentation for FileInputStream.close():
Closes this file input stream and releases any system resources associated with the stream.
If this stream has an associated channel then the channel is closed as well.

The InputStream classes are based on the Decorator pattern, so there is no memory issue; the underlying stream will be properly closed.
You just have to close the top-level InputStream:
bos.close();

You can't close the underlying input stream in the nested case, but you don't need to: the BufferedInputStream closes it for you. It's meant to be a convenience, and it's only reasonable. Think about it: why should you be allowed access to the underlying FileInputStream independently of the BufferedInputStream that encapsulates it? Such access would likely allow you to break the BufferedInputStream.
Closing an already closed stream is harmless anyway (close() is a no-op on an already closed FileInputStream); it's reading from a closed stream that throws java.io.IOException: Stream Closed.
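If you are on Java 7 or later, try-with-resources makes this pattern explicit and closes the stream even if an exception is thrown. A minimal sketch, reusing FILE_TO_READ from the question:
try (BufferedInputStream bos = new BufferedInputStream(new FileInputStream(FILE_TO_READ))) {
    int b;
    while ((b = bos.read()) != -1) {
        // process each byte here...
    }
} // closing bos here also closes the wrapped FileInputStream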

InputStream closes and sonar issues

I have the following code to open a zip file that contains several files, and extracts information from each file:
public static void unzipFile(InputStream zippedFile) throws IOException {
    try (ZipInputStream zipInputStream = new ZipInputStream(zippedFile)) {
        for (ZipEntry zipEntry = zipInputStream.getNextEntry(); zipEntry != null; zipEntry = zipInputStream.getNextEntry()) {
            BufferedReader reader = new BufferedReader(new InputStreamReader(new BoundedInputStream(zipInputStream, 1024)));
            // Extract info procedure...
        }
    }
}
In summary, I pick each file from the zip and open it with a BufferedReader to read the information from it. I'm also using BoundedInputStream (org.apache.commons.io.input.BoundedInputStream) to limit the buffer size and avoid unwanted huge lines in the files.
It works as expected, however I'm getting this warning on Sonar:
Use try-with-resources or close this "BufferedReader" in a "finally" clause.
I just can't close (or use try-with-resources, as I did at the beginning of the method) the BufferedReaders I create - if I call the close method, the ZipInputStream will close too. And the ZipInputStream is already under try-with-resources...
This Sonar notification is marked as critical, but I believe it is a false positive. I wonder if you could clarify this for me - am I correct, or should I handle this in a different way? I don't want to leave resource leaks in the code, since this method will be called many times and a leak could cause serious damage.
The Sonar notification is correct in that there is technically a resource leak that could eat up resources over time (see garbage collection and IO classes). To avoid closing the underlying ZipInputStream, consider wrapping the current zip entry's data in the BoundedInputStream inside the for loop, as per this SO question: reading files in a zip file. That way, when the BufferedReader is closed, the BoundedInputStream is closed and not the ZipInputStream.
Thanks to the answers here, I could address my issue this way:
BoundedInputStream boundedInputStream = new BoundedInputStream(zipInputStream, MAX_LINE_SIZE_BYTES);
boundedInputStream.setPropagateClose(false);
try(BufferedReader reader = new BufferedReader(new InputStreamReader(boundedInputStream))) { ...
With boundedInputStream.setPropagateClose(false), I can close the BufferedReader without closing the zipInputStream.
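Putting the pieces together, the whole method might look roughly like this (a sketch assembled from the snippets above; MAX_LINE_SIZE_BYTES is the question's constant):
public static void unzipFile(InputStream zippedFile) throws IOException {
    try (ZipInputStream zipInputStream = new ZipInputStream(zippedFile)) {
        for (ZipEntry zipEntry = zipInputStream.getNextEntry(); zipEntry != null; zipEntry = zipInputStream.getNextEntry()) {
            BoundedInputStream boundedInputStream = new BoundedInputStream(zipInputStream, MAX_LINE_SIZE_BYTES);
            boundedInputStream.setPropagateClose(false); // closing the reader must not close the zip stream
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(boundedInputStream))) {
                // Extract info procedure...
            }
        }
    }
}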

Do I need to close a stream?

Do I need to close a FileOutputStream in the following example? And why?
FileOutputStream fos = new FileOutputStream("bytes.info");
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(data);
oos.close();
If there were no exceptions thrown, then the FileOutputStream would be closed by oos.close().
An exception thrown in writeObject would prevent any of the streams from being closed. So the close call should be in a finally block.
There's the additional problem that the ObjectOutputStream constructor itself can throw an exception: it writes the stream header in the constructor, and that write can fail. In this case the FileOutputStream needs to be closed, but calling oos.close() is not possible because there is no ObjectOutputStream reference. So you really need two separate calls to close, one for each stream, each in a finally block.
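For reference, a minimal sketch of that manual pattern with nested try/finally blocks (data is the object from the question):
FileOutputStream fos = new FileOutputStream("bytes.info");
try {
    // the ObjectOutputStream constructor writes the stream header and can throw
    ObjectOutputStream oos = new ObjectOutputStream(fos);
    try {
        oos.writeObject(data);
    } finally {
        oos.close(); // flushes and closes fos as well
    }
} finally {
    fos.close(); // no effect if oos.close() already closed it
}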
Using try-with-resources takes care of all of this for you:
try (
    FileOutputStream fos = new FileOutputStream("bytes.info");
    ObjectOutputStream oos = new ObjectOutputStream(fos)
) {
    oos.writeObject(data);
}
Yes, you need to close the stream. Leaving a FileOutputStream unclosed creates the possibility that data which was successfully written to the stream never gets saved to the file. If a program opens multiple file streams, not closing them can cause failures due to running out of native resources (too many files open simultaneously).
FileOutputStream manages native resources, which are released by the close method. The class has a finalizer too, which also releases the resources. As part of releasing the native resources, the stream finishes writing out any buffered data. However, since the JVM does not guarantee that a finalizer is called on every object, failing to call close creates a risk of leaving buffered data unwritten.
Of course you have to close the FileOutputStream. If you don't, the data you write might never be flushed to the file, and you may end up with an empty file after the program runs. You might also want to use try-with-resources so that you don't have to close the stream manually and can do the exception handling in the same statement.
try (FileOutputStream fos = new FileOutputStream("bytes.info");
     ObjectOutputStream oos = new ObjectOutputStream(fos)) {
    oos.writeObject(data);
} catch (IOException e) {
    // handle the exception
}
If you are not using try-with-resources, close the file streams manually in a finally block.
FileOutputStream fos = null;
ObjectOutputStream oos = null;
try {
    fos = new FileOutputStream("bytes.info");
    oos = new ObjectOutputStream(fos);
    oos.writeObject(data);
} catch (IOException e) {
    // handle the exception
} finally {
    // close the outer stream first so it can flush before the file stream closes
    if (oos != null) {
        try { oos.close(); } catch (IOException ignored) { }
    }
    if (fos != null) {
        try { fos.close(); } catch (IOException ignored) { }
    }
}
You must check whether those file streams are null; if construction failed they will be null, and calling close() on them would throw a NullPointerException. Still, it's better to use try-with-resources.

Java Try-with-resource storing input stream in Map

In my API (Spring Boot) I have an endpoint where users can upload multiple files at once. The endpoint takes as input a list of MultipartFile.
I don't wish to pass the MultipartFile objects directly to the service, so I loop through each MultipartFile and build a simple map that stores the filename and its InputStream.
Like this:
for (MultipartFile multipartFile : files) {
    try (InputStream is = multipartFile.getInputStream()) {
        filesMap.put(multipartFile.getOriginalFilename(), is);
    }
}
service.uploadFiles(filesMap);
My understanding of Java streams and stream closing is quite limited.
I thought that try-with-resources automatically closes the InputStream once the code reaches the end of the try block.
In the above code, when exactly does multipartFile.getInputStream() get closed?
Will the fact that I'm storing the streams in a map cause a memory leak?
The stream closes right after execution reaches the closing bracket of the try block.
It is okay to store an InputStream anywhere after you have closed it.
But be aware that you can't read anything from the stream after you close it.
Thanks to the comments:
Also, be aware that some streams have special behavior on close(); it always depends on the stream implementation.
For example:
If you try to read from a closed FileInputStream you will get
java.io.IOException: Stream Closed
If you try to read from a closed ByteArrayInputStream it will be okay, because of its special close() implementation: public void close() throws IOException {}
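A quick sketch to illustrate the difference (the file name is just a placeholder, not from the original question):
FileInputStream fis = new FileInputStream("example.txt"); // placeholder file
fis.close();
// fis.read();  // would throw java.io.IOException: Stream Closed

ByteArrayInputStream bais = new ByteArrayInputStream(new byte[] {1, 2, 3});
bais.close();
int first = bais.read(); // still works and returns 1, because close() does nothing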
When exactly does multipartFile.getInputStream() get closed?
try (InputStream is = multipartFile.getInputStream()) {
    filesMap.put(multipartFile.getOriginalFilename(), is);
} // <-- here
The try-with-resources statement ensures that each resource is closed at the end of the statement.
Will the fact that I'm storing the streams in a map cause a memory leak?
No, your collection just keeps closed InputStreams and you won't be able to read from them (reading will throw an IOException).

If I close ObjectOutputStream, do I not need to close FileOutputStream?

My code like below.
Map<String, String> aMap = new HashMap<>();
aMap.put("A", "a");
FileOutputStream fos = new FileOutputStream(new File("some.txt"));
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(aMap);
oos.flush();
oos.close();
I thought I need to close fos, but other says its fine.
Is it really fine to not close FileOutputStream, because I already closed inner OutputStream?
Yes, you don't need to close it separately. If you close oos, it will internally close fos as well. Closing the outermost stream delegates the close all the way down.
No, you don't need to close the FileOutputStream.
If you check the code of ObjectOutputStream.close(), you will find that it closes the underlying output stream.
See the documentation: http://docs.oracle.com/javase/7/docs/api/java/io/ObjectOutputStream.html
You don't need to do this explicitly, it will be done automatically. Take a look at the example from javadoc:
FileOutputStream fos = new FileOutputStream("t.tmp");
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeInt(12345);
oos.writeObject("Today");
oos.writeObject(new Date());
oos.close();
The link could be found here: Class ObjectOutputStream
It's a general rule in Java: if you have several chained/nested streams, say outStream3(outStream2(outStream1)) (just pseudo-code), you usually need to close only the outermost stream - outStream3 in this case. Internally, when you call close on outStream3, it will call close on outStream2, which will in turn call close on outStream1. There are a few exceptions to this rule, but this is the general rule to remember.
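To see why the delegation works, here is a simplified sketch of what a wrapping stream's close() typically looks like (modeled loosely on java.io.FilterOutputStream, not the actual JDK source):
import java.io.IOException;
import java.io.OutputStream;

// Simplified decorator: closing the wrapper flushes it and then closes
// the stream it wraps, which is why closing the outermost stream is enough.
public class SimpleFilterOutputStream extends OutputStream {
    protected final OutputStream out;

    public SimpleFilterOutputStream(OutputStream out) {
        this.out = out;
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
    }

    @Override
    public void close() throws IOException {
        try {
            flush();
        } finally {
            out.close(); // delegate close to the wrapped stream
        }
    }
}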

Client-Server File Transfer in Java

I'm looking for an efficient way to transfer files between client and server processes using TCP in Java. My server code looks something like this:
socket = serverSocket.accept();
InputStream is = socket.getInputStream();
OutputStream os = socket.getOutputStream();
FileInputStream fis = new FileInputStream(new File(filename));
I'm just unsure of how to proceed. I know I want to read bytes from fis and then write them to os, but I'm unsure about the best way to read and write bytes using byte streams in Java. I'm only familiar with writing/reading text using Writers and Readers. Can anyone tell me the appropriate way to do this? What should I wrap os and fis in (if anything), and how do I keep reading bytes until the end of file without a hasNext() method (or equivalent)?
You could do something like:
byte[] contents = new byte[BUFFER_SIZE];
int numBytes = 0;
while ((numBytes = fis.read(contents)) != -1) {
    os.write(contents, 0, numBytes);
}
You could use Apache's IOUtils.copy(in, out) or
import org.apache.commons.fileupload.util.Streams;
...
Streams.copy(in, out, false);
Inspecting the source might prove interesting. ( http://koders.com ?)
There is also java.nio.channels.FileChannel with a transferTo method; opinions in the community are mixed on whether it is better for smaller or larger files.
A simple block-wise copy between InputStream and OutputStream would be okay. You could wrap them in buffered streams.
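For the NIO route mentioned above, a sketch using FileChannel.transferTo might look like this (filename and socket are the variables from the question's server code; it uses java.nio.channels.FileChannel, Channels and WritableByteChannel):
try (FileInputStream fis = new FileInputStream(filename);
     FileChannel fileChannel = fis.getChannel()) {
    WritableByteChannel target = Channels.newChannel(socket.getOutputStream());
    long position = 0;
    long size = fileChannel.size();
    while (position < size) {
        // transferTo may transfer fewer bytes than requested, so loop until done
        position += fileChannel.transferTo(position, size - position, target);
    }
}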
