To produce two different files into a ZipOutputStream I am using two methods that receive an OutputStream, something like this:
private void produceFileA(OutputStream stream) {
    final PrintStream printer = new PrintStream(stream);
    ......
}

private void produceFileB(OutputStream stream) {
    final PrintStream printer = new PrintStream(stream);
    .....
}
client:

private void produceZip() {
    try (ZipOutputStream zipStream = new ....) {
        ...
        produceFileA(zipStream);
        ...
        produceFileB(zipStream);
        ....
    }
}
Both will receive the same ZipOutputStream, so I don't see how I can close the PrintStream created in each method: if produceFileA closes it, produceFileB will not work anymore, since it would receive a closed stream. Is it OK to leave the PrintStreams unclosed after the method calls finish? (The underlying zipStream is closed by the client of these methods.)
I am trying to be sure my PrintStreams don't leak any resources.
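For illustration, here is a fleshed-out sketch of the pattern (the entry names, output file, and contents are hypothetical): the producers only flush, and only the client closes the ZipOutputStream, which in turn closes everything beneath it.

private void produceFileA(OutputStream stream) {
    final PrintStream printer = new PrintStream(stream);
    printer.println("contents of file A"); // hypothetical content
    printer.flush(); // flush, but do not close the shared stream
}

private void produceZip() throws IOException {
    try (ZipOutputStream zipStream =
             new ZipOutputStream(new FileOutputStream("out.zip"))) { // hypothetical target
        zipStream.putNextEntry(new ZipEntry("a.txt"));
        produceFileA(zipStream);
        zipStream.closeEntry();

        zipStream.putNextEntry(new ZipEntry("b.txt"));
        produceFileB(zipStream);
        zipStream.closeEntry();
    } // closing zipStream also closes the FileOutputStream beneath it
}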
I am planning a function that creates and returns an InputStream, which in turn reads from another InputStream, because the initialization of that inner InputStream is not trivial and I would like to use it in multiple places. Consider this simple example:
private static InputStream openStream() throws IOException {
    Path path = Paths.get("/etc/passwd");
    InputStream inputStream = Files.newInputStream(path);
    return new BufferedInputStream(inputStream);
}
I will use this function as follows:
public static void main(String[] args) {
    try (InputStream stream = openStream()) {
        byte[] buffer = new byte[1024];
        int numBytes;
        while ((numBytes = stream.read(buffer, 0, buffer.length)) > 0) {
            System.out.printf("Just read %d bytes from stream!%n", numBytes);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
However, I am concerned that closing the BufferedInputStream in this example will not close the InputStream inside it. Will this lead to orphaned file handles and memory leaks if called multiple times? What is a better solution for this?
A simple solution I could think of is to define a closeable container class and put both input streams into it. When close() is called, this class would simply close all of its open handles.
class StreamContainer implements Closeable {
    private final InputStream[] inputStreams;

    public StreamContainer(InputStream... inputStreams) {
        this.inputStreams = inputStreams;
    }

    @Override
    public void close() throws IOException {
        for (InputStream inputStream : this.inputStreams) {
            inputStream.close();
        }
    }
}
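Used in a try-with-resources block, the container would then close both handles at the end (a sketch reusing the streams from openStream()):

Path path = Paths.get("/etc/passwd");
InputStream raw = Files.newInputStream(path);
InputStream buffered = new BufferedInputStream(raw);
try (StreamContainer container = new StreamContainer(buffered, raw)) {
    // read from buffered as before; container.close() closes buffered
    // (which also closes raw), then closes raw again, a harmless no-op
}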
But I suppose there might be a better solution, a built-in mechanism, or a development pattern. Or maybe these constructs should be avoided?
In cases like this you should read the source code of BufferedInputStream. This is the definition of close():
public void close() throws IOException {
    while (true) {
        byte[] buffer;
        if ((buffer = this.buf) != null) {
            if (!U.compareAndSetObject(this, BUF_OFFSET, buffer, (Object) null)) {
                continue;
            }
            InputStream input = this.in;
            this.in = null;
            if (input != null) {
                input.close();
            }
            return;
        }
        return;
    }
}
As you can see, when closing the BufferedInputStream, the underlying input stream is closed as well.
And this is the documentation of close:
public void close() throws IOException

Closes this input stream and releases any system resources associated with the stream. Once the stream has been closed, further read(), available(), reset(), or skip() invocations will throw an IOException. Closing a previously closed stream has no effect.
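You can confirm this behaviour in a few lines (the file path is just an example):

InputStream raw = new FileInputStream("/etc/passwd"); // example path
InputStream buffered = new BufferedInputStream(raw);
buffered.close();
raw.read(); // throws IOException: the wrapped stream was closed too

So the openStream() pattern from the question is safe: closing the returned BufferedInputStream also closes the stream it wraps.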
I am creating an application where a client can send TCP packets to a server and the server will respond. However, when I flush() my buffered data to an OutputStream, it produces a NullPointerException. For each new client that connects, a new thread with a class is created. Here is the class:
public class clientHandler implements Runnable {
    private boolean loggedin = false;
    private String username = "";
    Socket cs;
    OutputStream rawoutput; // for some reason I had to pass the outputstream directly to the constructor rather than get it here, here it would produce a npe

    clientHandler(Socket clientSocket, OutputStream bos) {
        rawoutput = bos;
        cs = clientSocket;
    }

    DataOutputStream output;
    {
        try {
            output = new DataOutputStream(rawoutput);
        } catch (Exception e) {
            System.out.println("rip OS");
            e.printStackTrace();
        }
    }

    // a couple more methods...

    private void metaData(String data) throws Exception {
        if (data.startsWith("/iusername")) {
            String[] useful = data.split(";");
            System.out.println(useful[0].split(":")[1] + " logged in");
            output.writeBytes("logged in");
            output.flush();
            username = useful[0].split(":")[1];
            loggedin = true;
        }
    }
}
I am using netcat as a 'client' until I write my own. Is this perhaps why it causes the NPE? I have tried all different kinds of OutputStreams and Writers, and they all produce the error at either the initialization, the write, or the flush.
Thanks to the user Jon Skeet, I fixed the error. The problem was that I had put the output DataOutputStream in an instance initializer block without realizing it, so it ran before the constructor body assigned rawoutput. To fix it, I just moved that into the run() method, and now everything works.
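For reference, a minimal sketch of that fix (irrelevant details omitted): the DataOutputStream is created inside run(), after the constructor has assigned rawoutput.

public class clientHandler implements Runnable {
    private final Socket cs;
    private final OutputStream rawoutput;
    private DataOutputStream output;

    clientHandler(Socket clientSocket, OutputStream bos) {
        rawoutput = bos;
        cs = clientSocket;
    }

    @Override
    public void run() {
        // By now rawoutput has been assigned, so no NPE here.
        output = new DataOutputStream(rawoutput);
        // ... handle the client ...
    }
}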
The JavaDoc for InputStreamReader doesn't say anything about closing the underlying InputStream:
https://docs.oracle.com/javase/8/docs/api/java/io/InputStreamReader.html#close--
Description copied from class: Reader
Closes the stream and releases any system resources associated with it. Once the stream has been closed, further read(), ready(), mark(), reset(), or skip() invocations will throw an IOException. Closing a previously closed stream has no effect.
Does closing an InputStreamReader also close the underlying InputStream?
UPDATE: In

InputStreamReader istream = new InputStreamReader(conn.getInputStream(), "UTF-8");
istream.close();
Do I need to close conn.getInputStream()?
The InputStreamReader implementation delegates the close() call to StreamDecoder, an internal JDK class.
As other answers and comments said, the answer is yes, it does close the InputStream. You can see for yourself with the following code:
InputStream is = new FileInputStream("D:\\a.txt");
Reader r = new InputStreamReader(is);
r.close();
is.read(); // throws exception: stream is closed.
Therefore, if you close the Reader, you don't need to also close the InputStream. However, I guess you are using try-with-resources everywhere (aren't you? ;) ) and the InputStream as well as the Reader will both be closed at the end of the try block. That doesn't matter, because an InputStream can be closed multiple times; it's a no-op if the stream is already closed.
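For example, this is safe even though closing the Reader already closes the stream (same hypothetical file as above):

try (InputStream is = new FileInputStream("D:\\a.txt");
     Reader r = new InputStreamReader(is)) {
    // use r here; on exit r.close() runs first (closing is as well),
    // then is.close() runs again, which is a harmless no-op
}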
If you want to avoid closing the InputStream, you can write a simple wrapper that does nothing when it is closed:
class UncloseableInputStream extends FilterInputStream {
    public UncloseableInputStream(InputStream is) {
        super(is);
    }

    @Override
    public void close() {
        // Do nothing: deliberately leave the wrapped stream open.
    }
}
InputStream is = new FileInputStream("D:\\a.txt");
Reader r = new InputStreamReader(new UncloseableInputStream(is));
r.close();
is.read(); // still works despite closing the reader.
It depends on the stream implementation. InputStream is just an "interface" in terms of close(): InputStreamReader will not close an interface; it will close the underlying data resource (such as a file descriptor) if there is one, and it will do nothing if close() is overridden to be empty in an implementation.
In OpenJDK, StreamDecoder has this method:
void implClose() throws IOException {
    if (this.ch != null) {
        this.ch.close();
    } else {
        this.in.close();
    }
}
this.in is the InputStream passed to the decoder's constructor:
StreamDecoder(InputStream var1, Object var2, CharsetDecoder var3) {
    ...
    if (this.ch == null) {
        this.in = var1;
        ...
    }
    ...
}
Here are examples of the different closing behaviours. ByteArrayInputStream:
Closing a ByteArrayInputStream has no effect. The methods in this class can be called after the stream has been closed without generating an IOException.
public void close() throws IOException {
}
FileInputStream differs:

Closes this file input stream and releases any system resources associated with the stream. If this stream has an associated channel then the channel is closed as well.

After you have closed the underlying instance, it doesn't matter which wrappers were using it: it will be closed.
public void close() throws IOException {
    synchronized (closeLock) {
        if (closed) {
            return;
        }
        closed = true;
    }
    if (channel != null) {
        channel.close();
    }
    fd.closeAll(new Closeable() {
        public void close() throws IOException {
            close0();
        }
    });
}
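A short demonstration of the difference (example path; exception handling omitted):

InputStream bytes = new ByteArrayInputStream(new byte[] {1, 2, 3});
bytes.close();
System.out.println(bytes.read()); // still prints 1: close() was a no-op

InputStream file = new FileInputStream("/etc/passwd"); // example path
file.close();
file.read(); // throws IOException: the file descriptor was released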
I'm experimenting with serialization and wrote the following code:
public static void main(String[] args) throws ClassNotFoundException {
    File file = new File("D:/serializable.txt");
    try (FileOutputStream fos = new FileOutputStream(file);
         ObjectOutputStream ous = new ObjectOutputStream(fos);
         FileInputStream fis = new FileInputStream(file);
         ObjectInputStream ois = new ObjectInputStream(fis)) {
        // SerialTest st = new SerialTest();
        // ous.writeObject(st);
        SerialTest st = (SerialTest) ois.readObject();
        System.out.println(st);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Serialized class:
public static class SerialTest implements Serializable {
    private int count;
    private Object object;

    public int count() {
        return count;
    }

    public Object object() {
        return object;
    }

    private void readObject(ObjectOutputStream ous) throws IOException {
        ous.writeObject(object);
        ous.writeInt(count);
    }

    private void writeObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
        ois.readInt();
        ois.readObject();
    }
}
After serializing the object as in the commented code, I tried to deserialize it as specified here. I got
java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2598)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1318)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
Moreover, the content of the file containing the serialized object is changed.
But when I remove the resource-declarations
FileOutputStream fos = new FileOutputStream(file);
ObjectOutputStream ous = new ObjectOutputStream(fos);
from the try-with-resources clause, it works completely fine. Why? Why do the resource declarations affect the deserialization?
Well without the write calls, you're currently creating an empty file to start with, because that's what the FileOutputStream constructor you're calling does. If the file already exists, it is truncated to be 0 bytes long. So when you then try to read an object from it, there's nothing to read.
Even with the writing part uncommented, there's still the possibility of buffering issues, where the data hasn't actually been written to the file yet.
I would strongly urge you to write the file and close the output stream, and then separately open it for input. Having the same file open for read and write at the same time seems like a recipe for confusing results to me.
So the code would look something like this:
// Exception handling omitted as this is just test code
public static void main(String[] args) throws Exception {
    File file = new File("D:/serializable.txt");
    try (FileOutputStream fos = new FileOutputStream(file);
         ObjectOutputStream ous = new ObjectOutputStream(fos)) {
        SerialTest st = new SerialTest();
        ous.writeObject(st);
    }
    try (FileInputStream fis = new FileInputStream(file);
         ObjectInputStream ois = new ObjectInputStream(fis)) {
        SerialTest st = (SerialTest) ois.readObject();
        System.out.println(st);
    }
}
Leaving aside the fact that you obviously have the content of readObject() and writeObject() back to front:
private void readObject(ObjectOutputStream ous) throws IOException {
    ous.writeObject(object);
    ous.writeInt(count);
}
Here you are writing first the object then the integer.
private void writeObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
    ois.readInt();
    ois.readObject();
}
Here you are reading first the integer then the object.
Ain't gonna work.
BUT ... you don't need either of these methods: remove them. Or, if you also want to serialize data of the parent class, fix them to (a) call defaultWriteObject() and defaultReadObject() respectively, dropping what you already have in there (it happens by default anyway), or at least (b) read the object and the integer in the same order as you wrote them and store them into the respective instance members.
NB serialized data isn't text and shouldn't be stored in files named .txt.
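For illustration, one corrected version along those lines (just a sketch; simply deleting both methods is the simplest fix): the method names match their roles, the fields are read in the order they were written, and the results are actually stored.

public static class SerialTest implements Serializable {
    private int count;
    private Object object;

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.writeInt(count);
        out.writeObject(object);
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        count = in.readInt();      // same order as written
        object = in.readObject();  // and stored into the fields this time
    }
}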
I have a flow which I have just converted from synchronous into queued-asynchronous.
At some point in a foreach, I am opening a file and setting a FileInputStream, as follows:
public class FileAsStream {
    // return a fileInputStream.
    public FileInputStream fileAsStream(String fileName) {
        File file = new File(fileName);
        FileInputStream fis = null;
        try {
            fis = new FileInputStream(file);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            return fis;
        }
    }
}
The FileInputStream then becomes my payload, and I'm sending it off to HTTP or SFTP endpoints. When the flow was synchronous, I could then run #[payload.close()] and close the stream, but now that it is asynchronous it doesn't look like I can: it fails when I attempt to close the stream. My question is whether it matters if I close the stream or not. Does Mule clean up objects created in the context of the flow? Or do I need to somehow close the stream after it has been sent to the endpoint?