Following up on a question I asked before: I am implementing a ByteArrayOutputStream with a capacity restriction. My main limitation is the amount of available memory. So, given such a stream os:
When I write more than, say, 1MB to the output stream, I need to "stop". Rather than throw an exception, I prefer to write the complete contents of os to another output stream passed as an argument:
OutputStream out;
os.writeTo(out);
And after that, continue writing to os from its beginning.
In order to prevent the situation described in 1., I prefer to drain os as frequently as possible, i.e., to copy the data from it to out in 512KB chunks.
Is this feasible? If so, any advice on how it can be done? Or maybe there is a built-in class that meets my requirements?
Edit: The amount of bytes written to out is also limited; I can write up to 1GB there. If I have more, I need to create another output stream and drain os into it instead.
The process of writing to os can go like this: 500MB is written there, and I transfer it immediately to out. Several seconds later, 700MB has been written there; I need to drain only 500MB to out and the other 200MB to another output stream (out2), which I'll need to create in that situation.
What you are describing is a BufferedOutputStream, which you can construct like this:
new BufferedOutputStream(out, 512000)
The first argument is the other output stream you have, and the second is the size of the BufferedOutputStream's internal buffer.
EDIT:
OK, I did not fully understand your need at first. You will indeed need to extend OutputStream to achieve that. Here is some sample code.
Here is how to use the class below:
public static void main(String[] args) throws IOException {
AtomicLong idx = new AtomicLong(0);
try (
OutputStream out = new OutputStreamMultiVolume(10, () -> new FileOutputStream(getNextFilename(idx)));
) {
out.write("01234567890123456789012345678901234567890123456789".getBytes("UTF-8"));
}
}
private static File getNextFilename(AtomicLong idx) {
return new File("sample.file." + idx.incrementAndGet() + ".txt");
}
The first constructor argument of OutputStreamMultiVolume is the max size of a volume. When we reach this size, we close the current output stream and call the OutputStreamSupplier to get the next one.
The example code will write the string 01234567890123456789012345678901234567890123456789 (5 times 0123456789) to files named 'sample.file.idx.txt', where idx is incremented each time we reach the max volume size (so you'll get 5 files).
And the class itself:
public class OutputStreamMultiVolume extends OutputStream {
private final long maxBytePerVolume;
private long bytesInCurrentVolume = 0;
private OutputStream out;
private OutputStreamSupplier outputStreamSupplier;
static interface OutputStreamSupplier {
OutputStream get() throws IOException;
}
public OutputStreamMultiVolume(long maxBytePerOutput, OutputStreamSupplier outputStreamSupplier) throws IOException {
this.outputStreamSupplier = outputStreamSupplier;
this.maxBytePerVolume = maxBytePerOutput;
this.out = outputStreamSupplier.get();
}
@Override
public synchronized void write(byte[] bytes) throws IOException {
final int remainingBytesInVol = (int) (maxBytePerVolume - bytesInCurrentVolume);
if (remainingBytesInVol >= bytes.length) {
out.write(bytes);
bytesInCurrentVolume += bytes.length;
return;
}
out.write(bytes, 0, remainingBytesInVol);
switchOutput();
this.write(bytes, remainingBytesInVol, bytes.length - remainingBytesInVol);
}
@Override
public synchronized void write(int b) throws IOException {
if (bytesInCurrentVolume + 1 <= maxBytePerVolume) {
out.write(b);
bytesInCurrentVolume += 1;
return;
}
switchOutput();
out.write(b);
bytesInCurrentVolume += 1;
}
@Override
public synchronized void write(byte[] b, int off, int len) throws IOException {
    final int remainingBytesInVol = (int) (maxBytePerVolume - bytesInCurrentVolume);
    if (remainingBytesInVol >= len) {
        out.write(b, off, len);
        bytesInCurrentVolume += len;
        return;
    }
    out.write(b, off, remainingBytesInVol);
    switchOutput();
    // The recursive call updates bytesInCurrentVolume for the remainder itself,
    // so no extra bookkeeping is needed here (the original double-counted).
    this.write(b, off + remainingBytesInVol, len - remainingBytesInVol);
}
private void switchOutput() throws IOException {
out.flush();
out.close();
out = outputStreamSupplier.get();
bytesInCurrentVolume = 0;
}
@Override
public synchronized void close() throws IOException {
out.close();
}
@Override
public synchronized void flush() throws IOException {
out.flush();
}
}
I'm afraid your original question was not fully specified, and so the answers you got were incomplete as well.
You should not use or extend ByteArrayOutputStream for flushing, because its main feature is to "write data into a byte array": i.e., all the data stays in memory, so you can retrieve it later through toByteArray.
If you want to flush your exceeding data, you need a buffered approach. This construction is enough:
OutputStream fileOut = new FileOutputStream(...);
final OutputStream out = new BufferedOutputStream(fileOut, 1024 * 1024);
In order to flush the data periodically, you can schedule a TimerTask to invoke flush:
Timer timer = new Timer(true);
TimerTask timerTask = new TimerTask() {
    public void run() {
        try {
            out.flush();
        } catch (IOException e) {
            // ...
        }
    }
};
timer.schedule(timerTask, delay, period);
I guess you could try using a java.nio.ByteBuffer in combination with java.nio.channels.Channels, which has a newChannel(OutputStream) method.
Like so:
ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
//... use buffer
OutputStream out = ...
drainBuffer(buffer, out);
and
public void drainBuffer(ByteBuffer buffer, OutputStream stream) throws IOException {
    WritableByteChannel channel = Channels.newChannel(stream);
    buffer.flip();   // make the bytes written so far readable
    channel.write(buffer);
    buffer.clear();  // ready the buffer for the next round of writes
}
I want to play streaming media received from an internet service. The media player works fine, but playback is sometimes interrupted due to a poor download rate.
On receiving media data, I run a thread that does decoding and other manipulations; the abstract code looks like this:
private void startConsuming(final InputStream input) {
consumingThread = new Thread() {
public void run() {
runConsumingThread(input);
}
};
consumingThread.start();
}
My idea is to calculate the buffer size needed to prevent interruption, and to start media playback once the buffer is filled (or, of course, if the stream ends).
private void startConsuming(final InputStream input) {
consumingThread = new Thread() {
public void run() {
runConsumingThread(input);
}
};
Thread fillBufferThread = new Thread() {
public void run() {
try {
while(input.available() < RECEIVING_BUFFER_SIZE_BYTES) {
log.debug("available bytes: " + input.available());
sleep(20);
}
} catch (Exception ex) {
// ignore
}
consumingThread.start();
}
};
fillBufferThread.start();
}
While debugging, I continuously get "available bytes: 0" as the stream arrives, and the while loop never breaks. I have already recognized that an EOFException will of course not occur, since I do not read from the InputStream.
How can I handle this? I thought that input.available() would increase on data arrival.
Why does runConsumingThread(input) work correctly in nearly the same manner, while my while loop in fillBufferThread does not?
EDIT: The following code nearly works (except that it wrongly consumes the input stream, which is then not played in consumingThread, but that should be easy to solve), but there must be a smarter solution.
[...]
Thread fillBufferThread = new Thread() {
    public void run() {
        final DataInputStream dataInput = new DataInputStream(input);
        try {
            int bufferSize = 0;
            byte[] localBuffer = new byte[RECEIVING_BUFFER_SIZE_BYTES];
            while (bufferSize < RECEIVING_BUFFER_SIZE_BYTES) {
                int len = dataInput.readInt();
                if (len > localBuffer.length) {
                    if (D) log.debug("increasing buffer length: " + len);
                    localBuffer = new byte[len];
                }
                bufferSize += len;
                log.debug("available bytes: " + bufferSize);
                dataInput.readFully(localBuffer, 0, len);
            }
        } catch (IOException ex) {
            // ignore
        }
        consumingThread.start();
    }
};
[...]
It can't be efficient to read from the stream just to learn that it has been filled with a certain number of bytes, or is it?
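One way to get the prefill without losing it (my own sketch, not from the original post; it reuses RECEIVING_BUFFER_SIZE_BYTES and runConsumingThread from the snippets above, plus java.io.ByteArrayInputStream and java.io.SequenceInputStream) is to buffer the first chunk in memory and then stitch it back in front of the live stream:
private void startConsumingWhenBuffered(final InputStream input) {
    Thread fillBufferThread = new Thread() {
        public void run() {
            byte[] prefill = new byte[RECEIVING_BUFFER_SIZE_BYTES];
            int total = 0;
            try {
                // Block until the prefill buffer is full or the stream ends.
                while (total < prefill.length) {
                    int len = input.read(prefill, total, prefill.length - total);
                    if (len < 0) break; // end of stream: play what we have
                    total += len;
                }
            } catch (IOException ex) {
                // ignore; play whatever was buffered
            }
            // Replay the buffered bytes, then continue with the live stream.
            runConsumingThread(new SequenceInputStream(
                    new ByteArrayInputStream(prefill, 0, total), input));
        }
    };
    fillBufferThread.start();
}
The consumer then sees one uninterrupted stream, so runConsumingThread needs no changes.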
Is there an implementation of GZIPOutputStream that would do the heavy lifting (compressing + writing to disk) in a separate thread?
We are continuously writing huge amounts of GZIP-compressed data. I am looking for a drop-in replacement that could be used instead of GZIPOutputStream.
You can write to a PipedOutputStream and have a thread which reads the PipedInputStream and copies it to any stream you like.
This is a generic implementation. You give it an OutputStream to write to and it returns an OutputStream for you to write to.
public static OutputStream asyncOutputStream(final OutputStream out) throws IOException {
PipedOutputStream pos = new PipedOutputStream();
final PipedInputStream pis = new PipedInputStream(pos);
new Thread(new Runnable() {
@Override
public void run() {
try {
byte[] bytes = new byte[8192];
for(int len; (len = pis.read(bytes)) > 0;)
out.write(bytes, 0, len);
} catch(IOException ioe) {
ioe.printStackTrace();
} finally {
close(pis);
close(out);
}
}
}, "async-output-stream").start();
return pos;
}
static void close(Closeable closeable) {
if (closeable != null) try {
closeable.close();
} catch (IOException ignored) {
}
}
I published some code that does exactly what you are looking for. It has always frustrated me that Java doesn't automatically pipeline calls like this across multiple threads, in order to overlap computation, compression, and disk I/O:
https://github.com/lukehutch/PipelinedOutputStream
This class splits writing to an OutputStream into separate producer and consumer threads (actually, starts a new thread for the consumer), and inserts a blocking bounded buffer between them. There is some data copying between buffers, but this is done as efficiently as possible.
You can even layer this twice to do the disk writing in a separate thread from the gzip compression, as shown in README.md.
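As an illustration of that layering (my own sketch, reusing the asyncOutputStream helper from the answer above rather than the linked library; "data.gz" and the payload are placeholders, and this assumes a method that throws IOException):
OutputStream toDisk = asyncOutputStream(new FileOutputStream("data.gz"));  // worker 1: disk I/O
OutputStream sink   = asyncOutputStream(new GZIPOutputStream(toDisk));    // worker 2: compression

sink.write("example payload".getBytes("UTF-8"));
sink.close(); // cascades: pipe EOF -> gzip trailer flushed -> file closed
The calling thread only feeds raw bytes into the outer pipe; one worker thread runs the deflater, and another writes the compressed output to disk.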
This is my code, I'm using rxtx.
public void Send(byte[] bytDatos) throws IOException {
this.out.write(bytDatos);
}
public byte[] Read() throws IOException {
byte[] buffer = new byte[1024];
int len = 20;
while(in.available()!=0){
in.read(buffer);
}
System.out.print(new String(buffer, 0, len) + "\n");
return buffer;
}
The rest of the code is just the same as before; I changed only two things.
InputStream in = serialPort.getInputStream();
OutputStream out = serialPort.getOutputStream();
They are global variables now and...
(new Thread(new SerialReader(in))).start();
(new Thread(new SerialWriter(out))).start();
do not exist anymore...
I'm sending this (each second)
Send(("123456789").getBytes());
And this is what I got:
123456789123
456789
123456789
1234567891
23456789
Can anybody help me?
EDIT
Later, I found a better way to solve it. Thanks; this was the read code:
public byte[] Read(int intEspera) throws IOException {
try {
Thread.sleep(intEspera);
} catch (InterruptedException ex) {
Logger.getLogger(COM_ClComunica.class.getName()).log(Level.SEVERE, null, ex);
}
byte[] buffer = new byte[528];
int len = 0;
while (in.available() > 0) {
len = in.available();
in.read(buffer,0,528);
}
return buffer;
}
It was impossible for me to get rid of that sleep, but it is not a problem, so thanks, veer.
You should indeed note that InputStream.available is defined as follows...
Returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream. The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes.
As you can see, this is not what you expected. Instead, you want to check for end-of-stream, which is indicated by InputStream.read() returning -1.
In addition, since you don't remember how much data you have already read in prior iterations of your read loop, you are potentially overwriting prior data in your buffer, which is again not something you likely intended.
What you appear to want is something as follows:
private static final int MESSAGE_SIZE = 20;
public byte[] read() throws IOException {
final byte[] buffer = new byte[MESSAGE_SIZE];
int total = 0;
int read = 0;
while (total < MESSAGE_SIZE
&& (read = in.read(buffer, total, MESSAGE_SIZE - total)) >= 0) {
total += read;
}
return buffer;
}
This should force it to read up to 20 bytes, reading fewer only if the end of the stream is reached.
Special thanks to EJP for reminding me to maintain the quality of my posts and make sure they're correct.
Get rid of the available() test. All it is doing is telling you whether there is data ready to be read without blocking. That isn't the same thing as telling you where an entire message ends. There are few correct uses for available(), and this isn't one of them.
And advance the buffer offset when you read. You need to keep track of how many bytes you have read so far, and use that as the second parameter to read(), with the remaining space (buffer.length minus that count) as the third parameter.
This question already has answers here:
Easy way to write contents of a Java InputStream to an OutputStream
(24 answers)
Closed 6 years ago.
I was trying to find the best way to pipe an InputStream to an OutputStream. I don't have the option of using other libraries such as Apache IO. Here is the snippet and its output.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.channels.FileChannel;
public class Pipe {
public static void main(String[] args) throws Exception {
for(PipeTestCase testCase : testCases) {
System.out.println(testCase.getApproach());
InputStream is = new FileInputStream("D:\\in\\lft_.txt");
OutputStream os = new FileOutputStream("D:\\in\\out.txt");
long start = System.currentTimeMillis();
testCase.pipe(is, os);
long end = System.currentTimeMillis();
System.out.println("Execution Time = " + (end - start) + " millis");
System.out.println("============================================");
is.close();
os.close();
}
}
private static PipeTestCase[] testCases = {
new PipeTestCase("Fixed Buffer Read") {
@Override
public void pipe(InputStream is, OutputStream os) throws IOException {
byte[] buffer = new byte[1024];
while(is.read(buffer) > -1) {
os.write(buffer);
}
}
},
new PipeTestCase("dynamic Buffer Read") {
@Override
public void pipe(InputStream is, OutputStream os) throws IOException {
byte[] buffer = new byte[is.available()];
while(is.read(buffer) > -1) {
os.write(buffer);
buffer = new byte[is.available() + 1];
}
}
},
new PipeTestCase("Byte Read") {
@Override
public void pipe(InputStream is, OutputStream os) throws IOException {
int c;
while((c = is.read()) > -1) {
os.write(c);
}
}
},
new PipeTestCase("NIO Read") {
@Override
public void pipe(InputStream is, OutputStream os) throws IOException {
FileChannel source = ((FileInputStream) is).getChannel();
FileChannel destination = ((FileOutputStream) os).getChannel();
destination.transferFrom(source, 0, source.size());
}
},
};
}
abstract class PipeTestCase {
private String approach;
public PipeTestCase( final String approach) {
this.approach = approach;
}
public String getApproach() {
return approach;
}
public abstract void pipe(InputStream is, OutputStream os) throws IOException;
}
Output (~4MB input file):
Fixed Buffer Read
Execution Time = 71 millis
============================================
dynamic Buffer Read
Execution Time = 167 millis
============================================
Byte Read
Execution Time = 29124 millis
============================================
NIO Read
Execution Time = 125 millis
============================================
'Dynamic Buffer Read' uses the available() method, but it is not reliable according to the Javadocs:
It is never correct to use the return value of this method to allocate
a buffer intended to hold all data in this stream.
'Byte Read' seems to be very slow.
So is 'Fixed Buffer Read' the best option for piping? Any thoughts?
Java 9
Since Java 9 one can use this method from InputStream:
public long transferTo(OutputStream out) throws IOException
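For example, to copy one file to another (the paths are placeholders of mine):
try (InputStream is = new FileInputStream("in.txt");
     OutputStream os = new FileOutputStream("out.txt")) {
    is.transferTo(os); // copies all remaining bytes; buffering is handled internally
}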
Pre Java 9
A one-liner from Apache Commons IO:
IOUtils.copy(inputStream, outputStream);
Documentation here. There are multiple copy methods with different parameters. It is also possible to specify the buffer size.
I came across this, and the final read can cause problems.
SUGGESTED CHANGE:
public void pipe(InputStream is, OutputStream os) throws IOException {
    int n;
    byte[] buffer = new byte[1024];
    while ((n = is.read(buffer)) > -1) {
        os.write(buffer, 0, n); // don't allow any extra bytes to creep into the final write
    }
    os.close();
}
I also agree that 16384 is probably a better fixed buffer size than 1024.
IMHO...
I would say a fixed buffer size is the best/easiest to understand. However, there are a few problems:
You're writing the entire buffer to the output stream each time. For the final block, the read may have returned fewer than 1024 bytes, so you need to take this into account when writing (basically, only write the number of bytes returned by read()).
In the dynamic buffer case you use available(). This is not a terribly reliable API call. I'm not sure whether it will be OK in this case, inside a loop, but I wouldn't be surprised if it were implemented sub-optimally in some implementations of InputStream.
In the last case, you are casting to FileInputStream. If you intend this to be general-purpose, you can't use that approach.
java.io contains PipedInputStream and PipedOutputStream
PipedInputStream input = new PipedInputStream();
PipedOutputStream output = new PipedOutputStream (input);
Write to output, and the bytes become available for reading from input. The construction can work the other way around as well (create the PipedOutputStream first and pass it to the PipedInputStream constructor).
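A minimal usage sketch (the thread and message are my own; assume it runs in a method that may throw IOException). The two ends are meant to be used from different threads, because the pipe's internal buffer is bounded:
PipedInputStream input = new PipedInputStream();
final PipedOutputStream output = new PipedOutputStream(input);

// Producer thread: writes into the pipe.
new Thread(new Runnable() {
    public void run() {
        try {
            output.write("hello through the pipe".getBytes("UTF-8"));
            output.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}).start();

// Consumer side: reads the same bytes back from 'input'.
int b;
while ((b = input.read()) != -1) {
    System.out.print((char) b);
}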
I have created normal publishers and subscribers implemented in Java. The publisher reads content in 1MB chunks out of a total size of 5MB and publishes each 1MB chunk to the subscriber. The data is published successfully. Now I'm facing an issue with appending the content to the existing file: in the end I find only the last 1MB of data in the file. Please let me know how to solve this issue. I have also attached the source code for the publisher and subscriber.
Publisher:
public class MessageDataPublisher {
static StringBuffer fileContent;
static RandomAccessFile randomAccessFile ;
public static void main(String[] args) throws IOException {
MessageDataPublisher msgObj=new MessageDataPublisher();
String fileToWrite="test.txt";
msgObj.towriteDDS(fileToWrite);
}
public void towriteDDS(String fileName) throws IOException{
DDSEntityManager mgr=new DDSEntityManager();
String partitionName="PARTICIPANT";
// create Domain Participant
mgr.createParticipant(partitionName);
// create Type
BinaryFileTypeSupport binary=new BinaryFileTypeSupport();
mgr.registerType(binary);
// create Topic
mgr.createTopic("Serials");
// create Publisher
mgr.createPublisher();
// create DataWriter
mgr.createWriter();
// Publish Events
DataWriter dwriter = mgr.getWriter();
BinaryFileDataWriter binaryWriter=BinaryFileDataWriterHelper.narrow(dwriter);
int bufferSize=1024*1024;
File readfile=new File(fileName);
FileInputStream is = new FileInputStream(readfile);
byte[] totalbytes = new byte[is.available()];
is.read(totalbytes);
byte[] readbyte = new byte[bufferSize];
BinaryFile binaryInstance;
int k=0;
for(int i=0;i<totalbytes.length;i++){
readbyte[k]=totalbytes[i];
k++;
if(k>(bufferSize-1)){
binaryInstance=new BinaryFile();
binaryInstance.name="sendpublisher.txt";
binaryInstance.contents=readbyte;
int status = binaryWriter.write(binaryInstance, HANDLE_NIL.value);
ErrorHandler.checkStatus(status, "MsgDataWriter.write");
k=0;
}
}
if(k < (bufferSize-1)){
byte[] remaingbyte = new byte[k];
for(int j=0;j<(k-1);j++){
remaingbyte[j]=readbyte[j];
}
binaryInstance=new BinaryFile();
binaryInstance.name="sendpublisher.txt";
binaryInstance.contents=remaingbyte;
int status = binaryWriter.write(binaryInstance, HANDLE_NIL.value);
ErrorHandler.checkStatus(status, "MsgDataWriter.write");
}
is.close();
try {
Thread.sleep(4000);
} catch (InterruptedException e) {
e.printStackTrace();
}
// clean up
mgr.getPublisher().delete_datawriter(binaryWriter);
mgr.deletePublisher();
mgr.deleteTopic();
mgr.deleteParticipant();
}
}
Subscriber:
public class MessageDataSubscriber {
static RandomAccessFile randomAccessFile ;
public static void main(String[] args) throws IOException {
DDSEntityManager mgr = new DDSEntityManager();
String partitionName = "PARTICIPANT";
// create Domain Participant
mgr.createParticipant(partitionName);
// create Type
BinaryFileTypeSupport msgTS = new BinaryFileTypeSupport();
mgr.registerType(msgTS);
// create Topic
mgr.createTopic("Serials");
// create Subscriber
mgr.createSubscriber();
// create DataReader
mgr.createReader();
// Read Events
DataReader dreader = mgr.getReader();
BinaryFileDataReader binaryReader=BinaryFileDataReaderHelper.narrow(dreader);
BinaryFileSeqHolder binaryseq=new BinaryFileSeqHolder();
SampleInfoSeqHolder infoSeq = new SampleInfoSeqHolder();
boolean terminate = false;
int count = 0;
while (!terminate && count < 1500) {
// to run indefinitely
binaryReader.take(binaryseq, infoSeq, 10,
ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value,ANY_INSTANCE_STATE.value);
for (int i = 0; i < binaryseq.value.length; i++) {
toWriteXML(binaryseq.value[i].contents);
terminate = true;
}
try
{
Thread.sleep(200);
}
catch(InterruptedException ie)
{
}
++count;
}
binaryReader.return_loan(binaryseq,infoSeq);
// clean up
mgr.getSubscriber().delete_datareader(binaryReader);
mgr.deleteSubscriber();
mgr.deleteTopic();
mgr.deleteParticipant();
}
private static void toWriteXML(byte[] bytes) throws IOException {
File Writefile=new File("samplesubscriber.txt");
if(!Writefile.exists()){
randomAccessFile = new RandomAccessFile(Writefile, "rw");
randomAccessFile.write(bytes, 0, bytes.length);
randomAccessFile.close();
}
else{
randomAccessFile = new RandomAccessFile(Writefile, "rw");
long i=Writefile.length();
randomAccessFile.seek(i);
randomAccessFile.write(bytes, 0, bytes.length);
randomAccessFile.close();
}
}
}
Thanks in advance
It is hard to give a conclusive answer to your question, because your issue could be the result of several different causes. Also, once the cause of the problem has been identified, you will probably have multiple options to mitigate it.
The first place to look is at the reader side. The code does a take() in a loop with a 200 millisecond pause between each take. Depending on your QoS settings on the DataReader, you might be facing a situation where your samples get overwritten in the DataReader while your application is sleeping for 200 milliseconds. If you are doing this over a gigabit ethernet, then a typical DDS product would be able to do those 5 chunks of 1 megabyte within that sleep period, meaning that your default, one-place buffer will get overwritten 4 times during your sleep.
This scenario would be likely if you used the default history QoS settings for your BinaryFileDataReader, which means history.kind = KEEP_LAST and history.depth = 1. Increasing the latter to a larger value, for example to 20, would result in a queue capable of holding 20 chunks of your file while you are sleeping. That should be sufficient for now.
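For illustration only (a sketch assuming the classic OMG DDS Java PSM that the question's code appears to use; the subscriber and topic variables are hypothetical, the question's DDSEntityManager hides reader creation, and exact names may differ by vendor), deepening the history could look roughly like this:
// Obtain the default DataReader QoS, deepen the history, and create the
// reader with it, so samples survive the subscriber's 200 ms sleeps.
DataReaderQosHolder qosHolder = new DataReaderQosHolder();
subscriber.get_default_datareader_qos(qosHolder);
qosHolder.value.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
qosHolder.value.history.depth = 20; // hold up to 20 chunks instead of 1
DataReader reader = subscriber.create_datareader(
        topic, qosHolder.value, null, STATUS_MASK_NONE.value);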
If this does not resolve your issue, other possible causes can be explored.