I have a web application using Java and Spring. The application can create a file and send it to the browser; this is working fine. The way I do it is:
I create the file in a Services class, and the method returns its address to the controller. The controller then sends the file, and it is downloaded correctly. The code for the controller method is this:
@RequestMapping("/getFile")
public @ResponseBody FileSystemResource getFile(HttpServletResponse response) {
    String address = Services.createFile();
    response.setContentType("application/vnd.ms-excel");
    return new FileSystemResource(new File(address));
}
The problem is that the file is saved on the server, and after many requests it will have a lot of files, which I have to delete manually. The question is: how can I delete this file after sending it? Or is there a way to send the file without saving it on the server?
Don't use @ResponseBody. Have Spring inject the HttpServletResponse and write directly to its OutputStream.
@RequestMapping("/getFile")
public void getFile(HttpServletResponse response) throws IOException {
    String address = Services.createFile();
    File file = new File(address);
    response.setContentType("application/vnd.ms-excel");
    response.setHeader("Content-Disposition", "attachment; filename=" + file.getName());
    OutputStream out = response.getOutputStream();
    FileInputStream in = new FileInputStream(file);
    // copy from in to out (IOUtils is from Apache Commons IO)
    IOUtils.copy(in, out);
    out.close();
    in.close();
    // the temporary file is no longer needed once it has been streamed
    file.delete();
}
I haven't added any exception handling. I leave that to you.
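If it helps, here is a minimal sketch of the same method using try-with-resources, so the streams are closed even if the copy fails (still assuming Commons IO's IOUtils is available):
@RequestMapping("/getFile")
public void getFile(HttpServletResponse response) throws IOException {
    File file = new File(Services.createFile());
    response.setContentType("application/vnd.ms-excel");
    response.setHeader("Content-Disposition", "attachment; filename=" + file.getName());
    try (FileInputStream in = new FileInputStream(file);
         OutputStream out = response.getOutputStream()) {
        IOUtils.copy(in, out);
    } finally {
        file.delete(); // remove the temporary file whether or not the copy succeeded
    }
}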
FileSystemResource is really just a wrapper for a FileInputStream that's used by Spring.
Or, if you want to be hardcore, you could make your own FileSystemResource implementation with its own getInputStream() method that returns your own implementation of FileInputStream that deletes the underlying file when you call close() on it.
So I decided to take Sotirious's suggestion for the "hardcore" way. It is pretty simple, but has one problem: if a user of the class opens the input stream once to check something and closes it, it will not be able to open it again, since the file is deleted on close. Spring does not seem to do that, but you will need to check after every version upgrade.
public class DeleteAfterReadFileSystemResource extends FileSystemResource {

    public DeleteAfterReadFileSystemResource(File file) {
        super(file);
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return new DeleteOnCloseFileInputStream(super.getFile());
    }

    private static final class DeleteOnCloseFileInputStream extends FileInputStream {

        private final File file;

        DeleteOnCloseFileInputStream(File file) throws FileNotFoundException {
            super(file);
            this.file = file;
        }

        @Override
        public void close() throws IOException {
            super.close();
            // the file is removed as soon as the consumer closes the stream
            file.delete();
        }
    }
}
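A hypothetical controller method using this class could look like the sketch below (Services.createFile() is the file-producing method from the question):
@RequestMapping("/getFile")
public @ResponseBody FileSystemResource getFile(HttpServletResponse response) {
    response.setContentType("application/vnd.ms-excel");
    // the file is deleted once Spring has finished streaming it to the client
    return new DeleteAfterReadFileSystemResource(new File(Services.createFile()));
}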
A minor adaptation of this answer.
Using an InputStreamResource instead of a FileSystemResource makes this a little shorter.
public class CleanupInputStreamResource extends InputStreamResource {
public CleanupInputStreamResource(File file) throws FileNotFoundException {
super(new FileInputStream(file) {
@Override
public void close() throws IOException {
super.close();
Files.delete(file.toPath());
}
});
}
}
You can write Mag's solution with anonymous classes like this:
new FileSystemResource(file) {
@Override
public InputStream getInputStream() throws IOException {
return new FileInputStream(file) {
@Override
public void close() throws IOException {
super.close();
Files.delete(file.toPath());
}
};
}
}
I used this answer and added some modifications. It is working so far. Due to my limited knowledge I couldn't create a dynamic proxy for my custom InputStream.
import static org.apache.commons.io.FileUtils.deleteQuietly;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Path;
import org.springframework.core.io.FileSystemResource;
import lombok.RequiredArgsConstructor;
public final class AutoDeleteFileSystemResource extends FileSystemResource {
public AutoDeleteFileSystemResource(Path filePath) {
super(filePath);
}
@RequiredArgsConstructor
private static final class AutoDeleteStream extends InputStream {
private final File file;
private final InputStream original;
@Override
public int read() throws IOException {
return original.read();
}
@Override
public void close() throws IOException {
original.close();
deleteQuietly(file);
}
@Override
public int available() throws IOException {
return original.available();
}
@Override
public int read(byte[] b) throws IOException {
return original.read(b);
}
@Override
public int read(byte[] b, int off, int len) throws IOException {
return original.read(b, off, len);
}
@Override
public long skip(long n) throws IOException {
return original.skip(n);
}
@Override
public boolean equals(Object obj) {
return original.equals(obj);
}
@Override
public int hashCode() {
return original.hashCode();
}
@Override
public synchronized void mark(int readlimit) {
original.mark(readlimit);
}
@Override
public boolean markSupported() {
return original.markSupported();
}
@Override
public synchronized void reset() throws IOException {
original.reset();
}
@Override
public String toString() {
return original.toString();
}
}
/**
* @see org.springframework.core.io.FileSystemResource#getInputStream()
*/
@Override
public InputStream getInputStream() throws IOException {
return new AutoDeleteStream(getFile(), super.getInputStream());
}
}
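For completeness, a hypothetical usage sketch returning this resource from a Spring controller; the mapping, media type, and temp-file handling are assumptions and not part of the original answer:
@GetMapping("/report")
public ResponseEntity<Resource> downloadReport() throws IOException {
    Path tempFile = Files.createTempFile("report", ".xlsx"); // assume some service fills this file
    return ResponseEntity.ok()
            .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=" + tempFile.getFileName())
            .contentType(MediaType.parseMediaType("application/vnd.ms-excel"))
            .body(new AutoDeleteFileSystemResource(tempFile)); // deleted after the response is written
}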
I am trying to use Apache Flink to write Parquet files to HDFS using a BucketingSink and a custom ParquetSinkWriter.
The code is below. The error indicates that when checkpointing is enabled (snapshotState() is called in the BucketingSink class), the flush() method below is not quite working: even though the writer is closed with writer.close(), I still get an error from writer = createWriter(). Any thoughts? Thanks.
I get an error like this:
org.apache.hadoop.fs.FileAlreadyExistsException:
/user/hive/flink_parquet_fils_with_checkingpoint/year=20/month=2/day=1/hour=17/_part-4-9.in-progress
for client 192.168.56.202 already exists
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:3003)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2890)
....
.
at flink.untils.ParquetSinkWriter.flush(ParquetSinkWriterForecast.java:81)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.snapshotState(BucketingSink.java:749)
import org.apache.flink.streaming.connectors.fs.Writer;
import org.apache.flink.util.Preconditions;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import java.io.IOException;
/**
* Parquet writer.
*
* @param <T>
*/
public class ParquetSinkWriter<T extends GenericRecord> implements Writer<T> {
private static final long serialVersionUID = -975302556515811398L;
private final CompressionCodecName compressionCodecName = CompressionCodecName.SNAPPY;
private final int pageSize = 64 * 1024;
private final String schemaRepresentation;
private transient Schema schema;
private transient ParquetWriter<GenericRecord> writer;
private transient Path path;
private int position;
public ParquetSinkWriter(String schemaRepresentation) {
this.schemaRepresentation = Preconditions.checkNotNull(schemaRepresentation);
}
@Override
public void open(FileSystem fs, Path path) throws IOException {
this.position = 0;
this.path = path;
if (writer != null) {
writer.close();
}
writer = createWriter();
}
@Override
public long flush() throws IOException {
Preconditions.checkNotNull(writer);
position += writer.getDataSize();
writer.close();
writer = createWriter();
return position;
}
@Override
public long getPos() throws IOException {
Preconditions.checkNotNull(writer);
return position + writer.getDataSize();
}
@Override
public void close() throws IOException {
if (writer != null) {
writer.close();
writer = null;
}
}
@Override
public void write(T element) throws IOException {
Preconditions.checkNotNull(writer);
writer.write(element);
}
@Override
public Writer<T> duplicate() {
return new ParquetSinkWriter<>(schemaRepresentation);
}
private ParquetWriter<GenericRecord> createWriter() throws IOException {
if (schema == null) {
schema = new Schema.Parser().parse(schemaRepresentation);
}
return AvroParquetWriter.<GenericRecord>builder(path)
.withSchema(schema)
.withDataModel(new GenericData())
.withCompressionCodec(compressionCodecName)
.withPageSize(pageSize)
.build();
}
}
It seems that the file you are trying to create already exists. This is because you are using the default write mode, CREATE, which fails when the file exists. What you can try is changing your code to use the OVERWRITE mode. You can change the createWriter() method to return something like this:
return AvroParquetWriter.<GenericRecord>builder(path)
.withSchema(schema)
.withDataModel(new GenericData())
.withCompressionCodec(compressionCodecName)
.withPageSize(pageSize)
.withWriteMode(ParquetFileWriter.Mode.OVERWRITE)
.build();
I'm looking for a magical Java class that will allow me to do something like this:
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
FileOutputStream fileStream = new FileOutputStream(new File("/tmp/somefile"));
MultiOutputStream outStream = new MultiOutputStream(byteStream, fileStream);
outStream.write("Hello world".getBytes());
Basically, I want tee for OutputStreams in Java. Any ideas?
Thanks!
Try the Apache Commons TeeOutputStream.
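For example, a small self-contained sketch of that approach, assuming commons-io is on the classpath and using the file path from the question:
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.commons.io.output.TeeOutputStream;

public class TeeExample {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
        try (OutputStream outStream =
                new TeeOutputStream(byteStream, new FileOutputStream("/tmp/somefile"))) {
            outStream.write("Hello world".getBytes());
        }
        // byteStream now holds the same bytes that were written to /tmp/somefile
    }
}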
Just roll your own; there isn't any magic at all. Apache's TeeOutputStream is basically implemented like the code below anyway. Of course, using the Apache Commons I/O library you can leverage other classes, but sometimes it is nice to actually write something for yourself. :)
public final class TeeOutputStream extends OutputStream {
private final OutputStream out;
private final OutputStream tee;
public TeeOutputStream(OutputStream out, OutputStream tee) {
if (out == null)
throw new NullPointerException();
else if (tee == null)
throw new NullPointerException();
this.out = out;
this.tee = tee;
}
@Override
public void write(int b) throws IOException {
out.write(b);
tee.write(b);
}
@Override
public void write(byte[] b) throws IOException {
out.write(b);
tee.write(b);
}
@Override
public void write(byte[] b, int off, int len) throws IOException {
out.write(b, off, len);
tee.write(b, off, len);
}
@Override
public void flush() throws IOException {
out.flush();
tee.flush();
}
@Override
public void close() throws IOException {
try {
out.close();
} finally {
tee.close();
}
}
}
Testing the above class with the following
public static void main(String[] args) throws IOException {
TeeOutputStream out = new TeeOutputStream(System.out, System.out);
out.write("Hello world!".getBytes());
out.flush();
out.close();
}
would print Hello world!Hello world!.
(Note: the overridden close() could use some care tho' :)
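For instance, one way to give close() that care is to keep the first exception and attach the second one as suppressed; just a sketch of an alternative close() for the class above:
@Override
public void close() throws IOException {
    IOException first = null;
    try {
        out.close();
    } catch (IOException e) {
        first = e;
    }
    try {
        tee.close();
    } catch (IOException e) {
        if (first == null) {
            first = e;
        } else {
            first.addSuppressed(e); // keep both failures visible
        }
    }
    if (first != null) {
        throw first;
    }
}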
Just found this thread because I had to face the same problem.
If someone wants to see my solution (Java 7 code):
package Core;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
public class MultiOutputStream extends OutputStream {
private List<OutputStream> out;
public MultiOutputStream(List<OutputStream> outStreams) {
this.out = new LinkedList<OutputStream>();
for (Iterator<OutputStream> i = outStreams.iterator(); i.hasNext();) {
OutputStream outputStream = (OutputStream) i.next();
if(outputStream == null){
throw new NullPointerException();
}
this.out.add(outputStream);
}
}
@Override
public void write(int arg0) throws IOException {
for (Iterator<OutputStream> i = out.iterator(); i.hasNext();) {
OutputStream var = (OutputStream) i.next();
var.write(arg0);
}
}
@Override
public void write(byte[] b) throws IOException{
for (Iterator<OutputStream> i = out.iterator(); i.hasNext();) {
OutputStream var = (OutputStream) i.next();
var.write(b);
}
}
@Override
public void write(byte[] b, int off, int len) throws IOException{
for (Iterator<OutputStream> i = out.iterator(); i.hasNext();) {
OutputStream var = (OutputStream) i.next();
var.write(b, off, len);
}
}
@Override
public void close() throws IOException{
for (Iterator<OutputStream> i = out.iterator(); i.hasNext();) {
OutputStream var = (OutputStream) i.next();
var.close();
}
}
@Override
public void flush() throws IOException{
for (Iterator<OutputStream> i = out.iterator(); i.hasNext();) {
OutputStream var = (OutputStream) i.next();
var.flush();
}
}
}
Works fine so far; I just tested some basic operations, e.g. setting up a MultiOutputStream from the System.out stream and 2 PrintStreams each writing into a separate log.
I used
System.setOut(new PrintStream(multiOutputStream));
to write to my terminal screen and two logs, which worked without any problems. (System.setOut() expects a PrintStream, so the MultiOutputStream has to be wrapped in one.)
final ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
final FileOutputStream fileStream = new FileOutputStream(new File("/tmp/somefile"));
OutputStream outStream = new OutputStream() {
public void write(int b) throws IOException {
byteStream.write(b);
fileStream.write(b);
}
};
outStream.write("Hello world".getBytes());
Roll your own; it's basically trivial. Use an ArrayList<OutputStream> (or whatever's popular nowadays) to store all the streams you want, and write the write methods to loop over all of them, writing to each.
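A minimal varargs sketch of that idea, matching the MultiOutputStream usage from the question (not a drop-in library class, and it makes no attempt at careful close() handling):
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.List;

public class MultiOutputStream extends OutputStream {
    private final List<OutputStream> streams;

    public MultiOutputStream(OutputStream... streams) {
        this.streams = Arrays.asList(streams);
    }

    @Override
    public void write(int b) throws IOException {
        for (OutputStream s : streams) {
            s.write(b);
        }
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        for (OutputStream s : streams) {
            s.write(b, off, len);
        }
    }

    @Override
    public void flush() throws IOException {
        for (OutputStream s : streams) {
            s.flush();
        }
    }

    @Override
    public void close() throws IOException {
        for (OutputStream s : streams) {
            s.close();
        }
    }
}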
I'm trying to write code that starts with a Path object for a specific file and makes it so the owner of the file no longer has permission to move, delete, or modify it, but can still read it. I also need to make sure that this can be undone, and that administrators keep unrestricted access at all times.
One of my main problems is that I can't figure out how to get the user profile names and group names that are in the system.
An in-depth explanation would be fantastic.
The File class can set a file as readable, writable, and executable, but only for everyone or for the owner. To control it per user, you need to use NIO.
The Files class in NIO uses PosixFilePermissions, which set permissions per file for three classes: owner, group, and others. On Windows, Administrators will be in a group, as well as System.
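For example, setting POSIX-style permissions with NIO looks like this; a small sketch with a made-up path, and it only works on file systems that expose a PosixFileAttributeView:
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PermissionExample {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("/tmp/report.txt");                        // hypothetical path
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("r--r--r--");
        Files.setPosixFilePermissions(file, perms);                      // owner/group/others: read only
    }
}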
To control moving we need our own SecurityManager. When moving files, NIO checks the write permission, so our SecurityManager has to intercept write checks. See my code below as an example.
P.S. Although the FileSystemProvider here is WindowsFileSystemProvider, this is what FileSystemProviders.getProvider (or the similarly named method) returns. It's possible that rt.jar differs for each OS it is downloaded on, but if you're on Windows you can assume this is correct.
PathRestrictor.java
package Testers;
import java.io.IOException;
import java.net.URI;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.AccessMode;
import java.nio.file.CopyOption;
import java.nio.file.DirectoryStream;
import java.nio.file.DirectoryStream.Filter;
import java.nio.file.FileStore;
import java.nio.file.FileSystem;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.OpenOption;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.FileAttributeView;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.spi.FileSystemProvider;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import sun.nio.fs.WindowsFileSystemProvider;
public class PathRestrictor extends FileSystemProvider{
boolean canRead;
boolean canWrite;
boolean canMove; //This is the tricky one
boolean canOpen;
private Path path;
WindowsFileSystemProvider provider = new WindowsFileSystemProvider();
public PathRestrictor(Path p){
path = p;
canRead = true;
canWrite = true;
canOpen = true;
}
public void setExecuteable(boolean executable){
canOpen = executable;
try {
Files.setPosixFilePermissions(path, getPerms());
} catch (IOException e) {
e.printStackTrace();
}
}
public void setReadable(boolean readable){
canRead = readable;
try {
Files.setPosixFilePermissions(path, getPerms());
} catch (IOException e) {
e.printStackTrace();
}
}
public void setWriteable(boolean writeable){
canWrite = writeable;
try {
Files.setPosixFilePermissions(path, getPerms());
} catch (IOException e) {
e.printStackTrace();
}
}
public void setMoveable(boolean moveable){
canMove = moveable;
MovementSecurityManager manager = new MovementSecurityManager();
if(!moveable)manager.unMoveablePaths.add(path.toString());
else manager.unMoveablePaths.remove(path.toString());
System.setSecurityManager(manager);
}
private Set<PosixFilePermission> getPerms() {
Set<PosixFilePermission> perms = new HashSet<PosixFilePermission>();
perms.add(PosixFilePermission.GROUP_EXECUTE);
perms.add(PosixFilePermission.GROUP_READ);
perms.add(PosixFilePermission.GROUP_WRITE);
if(canRead){
perms.add(PosixFilePermission.OTHERS_READ);
perms.add(PosixFilePermission.OWNER_READ);
}
if(canWrite){
perms.add(PosixFilePermission.OTHERS_WRITE);
perms.add(PosixFilePermission.OWNER_WRITE);
}
if(canOpen){
perms.add(PosixFilePermission.OTHERS_EXECUTE);
perms.add(PosixFilePermission.OWNER_EXECUTE);
}
return perms;
}
@Override
public void checkAccess(Path path, AccessMode... modes) throws IOException {
provider.checkAccess(path, modes);
}
@Override
public void copy(Path source, Path target, CopyOption... options)
throws IOException {
// TODO Auto-generated method stub
provider.copy(source, target, options);
}
@Override
public void createDirectory(Path dir, FileAttribute<?>... attrs)
throws IOException {
provider.createDirectory(dir, attrs);
}
@Override
public void delete(Path path) throws IOException {
provider.delete(path);
}
@Override
public <V extends FileAttributeView> V getFileAttributeView(Path path,
java.lang.Class<V> type, LinkOption... options) {
return provider.getFileAttributeView(path, type, options);
}
@Override
public FileStore getFileStore(Path path) throws IOException {
return provider.getFileStore(path);
}
@Override
public FileSystem getFileSystem(URI uri) {
return provider.getFileSystem(uri);
}
@Override
public Path getPath(URI uri) {
return provider.getPath(uri);
}
@Override
public String getScheme() {
return provider.getScheme();
}
@Override
public boolean isHidden(Path path) throws IOException {
return provider.isHidden(path);
}
@Override
public boolean isSameFile(Path path, Path path2) throws IOException {
return path.toString().equals(path2.toString());
}
@Override
public void move(Path source, Path target, CopyOption... options)
throws IOException {
MovementSecurityManager manager = new MovementSecurityManager();
manager.isMoving = true;
System.setSecurityManager(manager);
provider.move(source, target, options);
}
@Override
public SeekableByteChannel newByteChannel(Path path,
Set<? extends OpenOption> options, FileAttribute<?>... attrs)
throws IOException {
return provider.newByteChannel(path, options, attrs);
}
@Override
public DirectoryStream<Path> newDirectoryStream(Path dir,
Filter<? super Path> filter) throws IOException {
return provider.newDirectoryStream(dir, filter);
}
@Override
public FileSystem newFileSystem(URI uri, Map<String, ?> env)
throws IOException {
return provider.newFileSystem(uri, env);
}
@Override
public <A extends BasicFileAttributes> A readAttributes(Path path,
java.lang.Class<A> type, LinkOption... options) throws IOException {
return provider.readAttributes(path, type, options);
}
@Override
public Map<String, Object> readAttributes(Path path, String attributes,
LinkOption... options) throws IOException {
return provider.readAttributes(path, attributes, options);
}
@Override
public void setAttribute(Path path, String attribute, Object value,
LinkOption... options) throws IOException {
provider.setAttribute(path, attribute, value, options);
}
}
MovementSecurityManager.java
package Testers;
import java.util.HashSet;
import java.util.Set;
public class MovementSecurityManager extends SecurityManager {
public Set<String> unMoveablePaths = new HashSet<String>();
public boolean isMoving = true;
public void checkWrite(String path){
if(unMoveablePaths.contains(path) && isMoving) throw new SecurityException("Cannot move file!");
else super.checkWrite(path);
}
}
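A hypothetical way to call the two classes above (the path is made up, and this is only a usage fragment):
Path report = Paths.get("C:\\reports\\report.xlsx");   // made-up path
PathRestrictor restrictor = new PathRestrictor(report);
restrictor.setWriteable(false);   // owner can no longer modify the file
restrictor.setMoveable(false);    // installs the SecurityManager that blocks moving this file
// ...and later, to lift the restrictions again:
restrictor.setWriteable(true);
restrictor.setMoveable(true);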
I'm making a REST API for my Java database-like service using Vert.x.
It's not too difficult to write the JSON result as a String to the request's stream, as shown below:
...
routeMatcher.get("/myservice/api/v1/query/:query", req -> {
// get query
String queryString = req.params().get("query");
Query query = jsonMapper.readValue(queryString, Query.class);
// my service creates a list of resulting records...
List<Record> result = myservice.query(query);
String jsonResult = jsonMapper.writeValueAsString(result);
// write entire string to response
req.response().headers().set("Content-Type", "application/json; charset=UTF-8");
req.response().end(jsonResult);
});
...
However, I'd like to stream the Java List to the response by using Jackson's method:
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.writeValue(outputStream, result);
But I don't know how to connect Jackson's OutputStream argument to Vert.x's req.response(), as Vert.x has its own Buffer system that seems incompatible with Jackson's java.io.OutputStream argument.
Can't I use Jackson in combination with Vert.x? Should I write a custom serializer by hand with Vert.x's own JSON library? Other suggestions?
I assume you are generating huge JSON documents, since for small ones string output is good enough: objectMapper.writeValueAsString(result).
There's a problem with streams: ObjectMapper doesn't know the result size, and you will end up with this exception:
java.lang.IllegalStateException: You must set the Content-Length header to be the total size of the message body BEFORE sending any data if you are not using HTTP chunked encoding.
at org.vertx.java.core.http.impl.DefaultHttpServerResponse.write(DefaultHttpServerResponse.java:474)
So in your example I would use a temporary file for the JSON output and then flush it into the response (I haven't tested the code):
File tmpFile = File.createTempFile("tmp", ".json");
mapper.writeValue(tmpFile, result);
req.response().sendFile(tmpFile.getAbsolutePath(), res -> tmpFile.delete());
In case you know the content length up front, you can use the following code to adapt an OutputStream to Vert.x's WriteStream:
import org.vertx.java.core.buffer.Buffer;
import org.vertx.java.core.streams.WriteStream;
import java.io.IOException;
import java.io.OutputStream;
public class OutputWriterStream extends OutputStream {
public WriteStream writeStream;
public Runnable closeHandler;
@Override
public void write(int b) throws IOException {
throw new UnsupportedOperationException();
}
@Override
public void write(byte[] b, int off, int len) throws IOException {
if (off == 0 && len == b.length) {
writeStream.write(new Buffer(b));
return;
}
byte[] bytes = new byte[len];
System.arraycopy(b, off, bytes, 0, len);
writeStream.write(new Buffer(bytes));
}
@Override
public void write(byte[] b) throws IOException {
writeStream.write(new Buffer(b));
}
@Override
public void close() throws IOException {
closeHandler.run();
}
}
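A hypothetical wiring of this class with Jackson inside the request handler, assuming the content length is somehow known in advance (e.g. from a previously stored size) and omitting exception handling:
// somewhere inside the request handler; contentLength is assumed to be known
req.response().putHeader("Content-Length", String.valueOf(contentLength));

OutputWriterStream adapter = new OutputWriterStream();
adapter.writeStream = req.response();
adapter.closeHandler = () -> req.response().end();

objectMapper.writeValue(adapter, result); // Jackson streams straight into the Vert.x response and closes it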
This might be a slightly better answer (updated for Vert.x 3):
import io.vertx.core.file.AsyncFile;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.http.HttpServerResponse;
import io.vertx.core.streams.WriteStream;
import java.io.IOException;
import java.io.OutputStream;
public class OutputWriterStream extends OutputStream {
public OutputWriterStream(final WriteStream response) {
this.response = response;
this.buffer = new byte[8192];
}
@Override
public synchronized void write(final int b) throws IOException {
buffer[counter++] = (byte) b;
if (counter >= buffer.length) {
flush();
}
}
@Override
public void flush() throws IOException {
super.flush();
if (counter > 0) {
byte[] remaining = buffer;
if (counter < buffer.length) {
remaining = new byte[counter];
System.arraycopy(buffer, 0, remaining, 0, counter);
}
response.write(Buffer.buffer(remaining));
counter = 0;
}
}
@Override
public void close() throws IOException {
flush();
super.close();
if (response instanceof HttpServerResponse) {
try {
response.end();
}
catch (final IllegalStateException ignore) {
}
}
else if (response instanceof AsyncFile) {
((AsyncFile) response).close();
}
}
private final WriteStream<Buffer> response;
private final byte[] buffer;
private int counter = 0;
}
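A hypothetical usage sketch with Vert.x 3 and Jackson, using chunked encoding so no Content-Length is needed (the request/result variable names are assumptions, and exception handling is omitted):
HttpServerResponse response = request.response();
response.setChunked(true);
response.putHeader("Content-Type", "application/json; charset=UTF-8");

ObjectMapper objectMapper = new ObjectMapper();
objectMapper.writeValue(new OutputWriterStream(response), result); // close() flushes and ends the response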