I have some code like this:
byte tempBuffer[] = new byte[10000];
// call sleep thread (how long specified by second parameter)
// After sleep time is up it sets stopCapture to true
AudioSleepThread ast = new AudioSleepThread(this, seconds);
ast.start();
while (!this.stopCapture) {
    // this method blocks
    int cnt = targetDataLine.read(tempBuffer, 0, tempBuffer.length);
    System.out.println(cnt);
    if (cnt > 0) {
        // Subsequent requests must **only** contain the audio data.
        RequestThread reqt = new RequestThread(responseObserver, requestObserver, tempBuffer);
        reqt.start();
        // Add it to array list
        this.reqtArray.add(reqt);
    }
}
I have a tempBuffer, in which I store 10000 bytes at a time. Each time I have 10000 bytes worth of audio, I send it along a request thread to process this chunk of audio. My problem is that I keep sending the same buffer with the same audio to every single one of my request threads.
In my mind, what is supposed to happen is that targetDataLine reads the audio 10000 bytes at a time and passes a tempBuffer containing a different part of my audio to each of my request threads.
Perhaps I have misunderstood TargetDataLine.
You are only creating tempBuffer once, outside of your loop. Each call to targetDataLine.read overwrites the contents of the buffer with the new data. Unless you are copying the buffer in the RequestThread constructor, this will cause problems. You should probably create a new buffer for each read:
while (!this.stopCapture) {
    byte tempBuffer[] = new byte[10000];
    // this method blocks
    int cnt = targetDataLine.read(tempBuffer, 0, tempBuffer.length);
You must also take note of the number of bytes returned by the read (your cnt variable); read() does not guarantee to fill the buffer.
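Putting both fixes together, here is a minimal sketch that reuses the asker's names (targetDataLine, RequestThread, reqtArray) and uses java.util.Arrays.copyOf to trim the buffer to the bytes actually read:

while (!this.stopCapture) {
    byte[] tempBuffer = new byte[10000]; // fresh buffer per read, so threads never share it
    int cnt = targetDataLine.read(tempBuffer, 0, tempBuffer.length);
    if (cnt > 0) {
        // read() may return fewer than 10000 bytes; copy only what was read
        byte[] chunk = Arrays.copyOf(tempBuffer, cnt);
        RequestThread reqt = new RequestThread(responseObserver, requestObserver, chunk);
        reqt.start();
        this.reqtArray.add(reqt);
    }
}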
I am making a Java program that reads data from a binary stream (using a DataInputStream).
Sometimes during this process I need to read a data chunk, but the method that reads it (which I cannot modify) stops before reaching the end of the chunk. This is its normal behavior; apparently it just doesn't need the last bytes, but I can't do anything about the fact that they are there. This is not a problem in itself, because I know exactly how long the chunk is, i.e. how many bytes are in it, so I can skip bytes (with the skipBytes(int) method) until the end of the chunk. The problem is: I don't know how many bytes the method actually read (or left over), so I don't know how many bytes I need to skip to reach the end of the chunk.
Is there any way to:
know how many bytes were read from a stream since a certain point in time?
know how many bytes were read from a stream since it was opened?
any other way I could know how many bytes my data-chunk-reading method just read (since it won't directly tell me)?
Just in case, I made a small diagram.
Thanks in advance
ImageInputStream can do what you want. It implements DataInput, has most of the methods of InputStream, and provides getStreamPosition, seek, and skipBytes methods.
However, as you correctly point out, ImageIO.read(ImageInputStream) would close the stream, preventing you from reading more than one image.
The solution is to avoid using ImageIO.read, and instead obtain an ImageReader explicitly, using ImageIO.getImageReaders. Then you can invoke an ImageReader’s read method, which does not close the stream.
Here’s how I implemented it:
public void readImages(InputStream source,
                       Consumer<? super BufferedImage> imageHandler)
        throws IOException {
    // Every image is at a byte index which is a multiple of this number.
    int boundary = 5000;
    try (ImageInputStream stream = ImageIO.createImageInputStream(source)) {
        while (true) {
            long pos = stream.getStreamPosition();
            Iterator<ImageReader> readers = ImageIO.getImageReaders(stream);
            if (!readers.hasNext()) {
                break;
            }
            ImageReader reader = readers.next();
            reader.setInput(stream);
            BufferedImage image = reader.read(0);
            imageHandler.accept(image);

            pos = stream.getStreamPosition();
            long bytesToSkip = boundary - (pos % boundary);
            if (bytesToSkip < boundary) {
                stream.skipBytes(bytesToSkip);
            }
        }
    }
}
And here’s how I tested it:
try (InputStream source = new BufferedInputStream(
        Files.newInputStream(Path.of(filename)))) {
    reader.readImages(source, img -> EventQueue.invokeLater(() -> {
        JOptionPane.showMessageDialog(null, new ImageIcon(img));
    }));
}
All the buffered read methods return the actual number of bytes read.
Quoting documentation for InputStream#read(byte[] b):
Returns:
the total number of bytes read into the buffer, or -1 if there is no more data because the end of the stream has been reached.
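If no image APIs are involved, another option (my own suggestion, not taken from the answers above) is to wrap the raw stream in a small counting FilterInputStream before building the DataInputStream; Commons IO ships a similar CountingInputStream. A sketch:

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper: counts every byte that passes through, including skips.
class CountingInputStream extends FilterInputStream {
    private long count = 0;

    CountingInputStream(InputStream in) { super(in); }

    @Override public int read() throws IOException {
        int b = super.read();
        if (b != -1) count++;
        return b;
    }

    @Override public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) count += n;
        return n;
    }

    @Override public long skip(long n) throws IOException {
        long skipped = super.skip(n);
        count += skipped;
        return skipped;
    }

    long getCount() { return count; }
}

Snapshot getCount() before calling the chunk-reading method and subtract afterwards; the difference tells you how many bytes were consumed, and skipBytes can then take you to the chunk boundary.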
I'm trying to receive a file in byte[], and I'm using:
byte[] buffer = new byte[16384];          // How many bytes to read each run
InputStream in = socket.getInputStream(); // Get the data (bytes)
while ((count = in.read(buffer)) > 0) {   // While there is more data, keep running
    fos.write(buffer);                    // Write the data to the file
    times++;                              // Get the amount of times the loop ran
    System.out.println("Times: " + times);
}
System.out.println("Loop ended");
The loop stops after 1293 iterations and then stops printing the times. But the code never reaches System.out.println("Loop ended"); it seems like the loop is waiting for something...
Why doesn't the loop break?
Your loop terminates only at the end of the input stream. Has the sender terminated the stream (closed the socket)? If not, then there is no end yet.
In such a case, read() will block until there is at least one byte.
If the socket cannot be closed at the end of the file, for some reason, then you will need to find another way for the recipient to know when to exit the loop. A usual method is to first send the number of bytes that will be sent.
Your write-to-file is faulty as well, since it will attempt to write the entire buffer. But the read can return a partial buffer; that's why it returns a count. The returned count needs to be used in the write to the output file.
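A hedged sketch of both fixes, assuming the sender first writes the file length as a long with DataOutputStream.writeLong (the length-prefix convention suggested above):

DataInputStream in = new DataInputStream(socket.getInputStream());
long remaining = in.readLong();     // sender transmits the byte count first
byte[] buffer = new byte[16384];
while (remaining > 0) {
    int count = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
    if (count < 0) {
        throw new EOFException("stream ended early, " + remaining + " bytes missing");
    }
    fos.write(buffer, 0, count);    // write only the bytes actually read
    remaining -= count;
}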
I am trying to read from an InputStream. I wrote the code below:
byte[] bytes = new byte[1024 * 32];
while (bufferedInStream.read(bytes) != -1) {
    bufferedOutStream.write(bytes);
}
What I don't understand is how many bytes I should read in an iteration? The stream contains a file saved on the disk.
I read here but I didn't really understand the post.
Say you had a flow of water from a pipe into a bath. You then used a bucket to take water from the bath and carry it to, say, your garden to water the lawn. The bath is the buffer. While you are walking across the lawn the buffer is filling up, so when you return there is a bucketful for you to take again.
If the bath is tiny, it could overflow while you are away with the bucket, and you will lose water. If you have a massive bath, it is unlikely to overflow; so a larger buffer is more convenient. But of course a larger bath costs more money and takes up more space.
A buffer in your program takes up memory space. And you don't want to take up all your available memory for your buffer just because it is convenient.
Generally in your read function you can specify how many bytes to read. so even if you have a small buffer you could do this (pseudocode):
const int bufsize = 50;
buf[bufsize];
int read;
while ((read = is.read(buf, 0, bufsize)) > 0) {
    // do something with data - up to 'read' bytes
}
In the above code, bufsize is the MAXIMUM amount of data to read into the buffer.
If your read function does not allow you to specify a maximum number of bytes to read then you need to supply a buffer large enough to receive the largest possible read amount.
So the optimal buffer size is application specific. Only the application developer will know the characteristics of the data. Eg how fast is the flow of water into the bath. What bath size can you afford (embedded apps), how quickly can you carry bucket from bath across garden and back again.
It depends on the available memory, the size of the file, and other factors. You had better make some measurements.
PS: Your code is wrong. bufferedInStream.read(bytes) may not fill the whole buffer, but only part of it. The method returns the actual number of bytes read as its result.
byte[] bytes = new byte[1024 * 32];
int size;
while ((size = bufferedInStream.read(bytes)) != -1) {
    bufferedOutStream.write(bytes, 0, size);
}
Here is my suggestion (assuming we are dealing with just the input stream, not with how we are going to write to the output stream):
If your use case does not have any requirement for high read performance, go ahead with FileInputStream. For example:
FileInputStream fileInputStream = new FileInputStream("filePath");
byte[] bytes = new byte[1024];
int size;
while ((size = fileInputStream.read(bytes)) != -1) {
    outputStream.write(bytes, 0, size);
}
For better read performance, use BufferedInputStream, stick to its default buffer size, and read a single byte at a time. For example:
byte[] bytes = new byte[1];
BufferedInputStream bufferedInputStream =
        new BufferedInputStream(new FileInputStream("filePath"));
int size;
while ((size = bufferedInputStream.read(bytes)) != -1) {
    outputStream.write(bytes, 0, size);
}
For more performance, try tuning the buffer size of BufferedInputStream and read one byte at a time. For example:
byte[] bytes = new byte[1];
BufferedInputStream bufferedInputStream =
        new BufferedInputStream(new FileInputStream("filePath"), 16048);
int size;
while ((size = bufferedInputStream.read(bytes)) != -1) {
    outputStream.write(bytes, 0, size);
}
If you require even more, use a larger buffer on top of BufferedInputStream. For example:
byte[] bytes = new byte[1024];
BufferedInputStream bufferedInputStream =
        new BufferedInputStream(new FileInputStream("filePath"), 16048);
int size;
while ((size = bufferedInputStream.read(bytes)) != -1) {
    outputStream.write(bytes, 0, size);
}
You basically have a byte container of the length you specified (1024*32).
Then, the inputStream will fill as much of it as possible (probably the whole container), iteration after iteration, until it reaches the end of the file, at which point it fills only the remaining bytes and returns -1 on the next iteration (the one where it can't read anything).
So you are basically copying from input to output in chunks of 1024*32 bytes.
Hope that helps you understand the code.
By the way, on the last iteration, if the input stream has fewer than 1024*32 bytes left, the output will receive not only the last part of the file but also a repetition of the previous iteration's contents for the bytes not filled in the last iteration.
The idea is not to read the entire file contents at one time using the buffered input stream. You use the buffered input stream to read as many bytes as the bytes[] array size, consume the bytes read, and then move on to reading more bytes from the file. Hence you don't need to know the file size in order to read it.
This post will be more helpful, as it explains why you should wrap a FileInputStream with a BufferedInputStream:
Why is using BufferedInputStream to read a file byte by byte faster than using FileInputStream?
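For illustration, a minimal sketch of the buffered byte-by-byte read that the linked question discusses ("filePath" is a placeholder):

// Each in.read() normally hits the in-memory buffer; the underlying
// FileInputStream is only asked for data in large chunks (8192 bytes by default).
try (InputStream in = new BufferedInputStream(new FileInputStream("filePath"))) {
    int b;
    while ((b = in.read()) != -1) {
        // consume one byte at a time; actual disk reads stay infrequent
    }
}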
I'm running a multithreaded minimalistic http(s) server (not a web server though) that accepts connections on three server sockets: local, internet and internet-ssl.
Each socket has an SO_TIMEOUT of 1000 ms (which might be lowered in the future).
The worker threads read requests like this:
byte[] reqBuffer = new byte[512];
theSocket.getInputStream().read(reqBuffer);
The problem now is that with the newly implemented ssl socket the problem with the 1/n-1 record splitting technique arises. Also some clients split in other strange ways when using ssl (4/n-4 etc.) so I thought I might just perform multiple reads like this:
byte[] reqBuffer = new byte[512];
InputStream is = theSocket.getInputStream();
int read = is.read(reqBuffer, 0, 128); // initial read - with x/n-x this is very small
int pos = 0;
if (read > 0) {
    pos = read;
}
int i = 0;
do {
    read = is.read(reqBuffer, pos, 128);
    if (read > 0) {
        pos += read;
    }
    i++;
} while (read == 128 && i < 3); // max. 3 more reads (4 total = 512 bytes) or until fewer than 128 bytes are read (request should be completely read)
Which works with browsers like firefox or chrome and other clients using that technique.
Now my problem is that the new method is much slower. Requests to the local socket are so slow that a script with a 2-second timeout times out while requesting (I have no idea why). Maybe there is a logical problem in my code?
Is there a better way to read from a SSL socket? Because there are up to hundreds or even a thousand requests per second and the new read method slows down even the http requests.
Note: The ssl-socket is not in use at the moment and will not be used until I can fix this problem.
I have also tried reading line for line using a buffered reader since we are talking about http here but the server exploded running out of file descriptors (limit is 20 000). Might have been because of my implementation, though.
I'm thankful for every suggestion regarding this problem. If you need more information about the code, just tell me and I will post it asap.
EDIT:
I actually put a little more thought into what I am trying to do, and I realized that it comes down to reading HTTP headers. So the best solution would be to read the request line by line (or character by character) and stop reading after x lines or when an empty line (marking the end of the header) is reached.
My current approach is to put a BufferedInputStream around the socket's InputStream and read it with an InputStreamReader, which in turn is read by a BufferedReader (question: does it make sense to use a BufferedInputStream when I'm using a BufferedReader?).
The BufferedReader reads the request character by character, detects ends of lines (\r\n) and continues to read until either a line longer than 64 characters is reached, a maximum of 8 lines have been read, or an empty line is reached (marking the end of the HTTP header). I will test my implementation tomorrow and edit this edit accordingly.
EDIT:
I almost forgot to write my results here: it works, on every socket, even faster than the previously working approach. Thanks everyone for pointing me in the right direction. I ended up implementing it like this:
List<String> requestLines = new ArrayList<String>(6);
InputStream is = this.cSocket.getInputStream();
BufferedInputStream bis = new BufferedInputStream(is, 1024);
InputStreamReader isr = new InputStreamReader(bis, Config.REQUEST_ENCODING);
BufferedReader br = new BufferedReader(isr);

/* read input character by character
 * maximum line size is 768 characters
 * maximum number of lines is 6
 * lines are defined as char sequences ending with \r\n
 * read lines are added to a list
 * reading stops at the first empty line => HTTP header end
 */
int readChar;            // the last read character
int characterCount = 0;  // the character count in the line that is currently being read
int lineCount = 0;       // the overall line count
char[] charBuffer = new char[768]; // character buffer with space for 768 characters (max line size)

// read as long as the stream is not closed / EOF, the character count in the
// current line is below 768 and the number of lines read is below 6
while ((readChar = br.read()) != -1 && characterCount < 768 && lineCount < 6) {
    charBuffer[characterCount] = (char) readChar; // store the read character
    if (readChar == '\n' && characterCount > 0 && charBuffer[characterCount - 1] == '\r') { // end of line detected (\r\n)
        if (characterCount == 1) { // empty line
            break; // stop reading after an empty line (HTTP header ended)
        }
        requestLines.add(new String(charBuffer, 0, characterCount - 1)); // add the read line to the list (and leave out the \r)
        // charBuffer = new char[768]; // clear the buffer - not required
        characterCount = 0; // reset character count for next line
        lineCount++;        // increase read line count
    } else {
        characterCount++;   // if not end of line, increase read character count
    }
}
This is most likely slower because you are waiting for the other end to send more data, possibly data it is never going to send.
A better approach is to give it a larger buffer, like 32 KB (128 bytes is small), and read only the data which is available. If this data needs to be reassembled into messages of some sort, you shouldn't be using timeouts or a fixed number of loops, as read() is only guaranteed to return at least one byte.
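A sketch of that single-read approach, keeping the asker's variable names: one read into a 32 KB buffer blocks until at least one byte is available and then returns whatever has arrived, so no fixed loop count is needed:

byte[] reqBuffer = new byte[32 * 1024];
InputStream is = theSocket.getInputStream();
int read = is.read(reqBuffer); // returns as soon as some data is available;
                               // with the 1000 ms SO_TIMEOUT this may throw SocketTimeoutException
if (read > 0) {
    // parse reqBuffer[0..read-1]; read again only if the request is provably incomplete
}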
You should certainly wrap a BufferedInputStream around the SSLSocket's input stream.
Your technique of reading 128 bytes at a time and advancing the offset is completely pointless. Just read as much as you can at a time and deal with it. Or one byte at a time from the buffered stream.
Similarly you should certainly wrap the SSLSocket's output stream in a BufferedOutputStream.
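A minimal sketch of that buffering, assuming sslSocket is the accepted SSLSocket and responseBytes is a placeholder:

BufferedInputStream in = new BufferedInputStream(sslSocket.getInputStream());
BufferedOutputStream out = new BufferedOutputStream(sslSocket.getOutputStream());
// The buffer transparently reassembles 1/n-1 (and 4/n-4) split records,
// so byte-at-a-time header parsing stays cheap.
out.write(responseBytes);
out.flush(); // BufferedOutputStream holds data until flushed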
This very well may just be a KISS moment, but I feel like I should ask anyway.
I have a thread that is reading from a socket's InputStream. Since I am dealing with particularly small data sizes (the data I can expect to receive is in the order of 100 to 200 bytes), I set the buffer array size to 256. As part of my read function I have a check that ensures that when I read from the InputStream I got all of the data. If I didn't, then I recursively call the read function again, and for each recursive call I merge the two buffer arrays back together.
My problem is: while I never anticipate using more than the 256-byte buffer, I want to be safe. But if sheep begin to fly and the buffer is significantly larger, the read function will (by my estimation) take exponentially more time to complete.
How can I increase the efficiency of the read function and/or the buffer merging?
Here is the read function as it stands.
int BUFFER_AMOUNT = 256;

private int read(byte[] buffer) throws IOException {
    int bytes = mInStream.read(buffer); // Read the input stream
    if (bytes == -1) { // If bytes == -1 then we didn't get all of the data
        byte[] newBuffer = new byte[BUFFER_AMOUNT]; // Try to get the rest
        int newBytes;
        newBytes = read(newBuffer); // Recurse until we have all the data
        byte[] oldBuffer = new byte[bytes + newBytes]; // make the final array size
        // Merge buffer into the beginning of oldBuffer.
        // We do this so that once the method finishes, we can just add the
        // modified buffer to a queue later in the class for processing.
        for (int i = 0; i < bytes; i++)
            oldBuffer[i] = buffer[i];
        for (int i = bytes; i < bytes + newBytes; i++) // Merge newBuffer into the latter half of oldBuffer
            oldBuffer[i] = newBuffer[i];
        // Used for the recursion
        buffer = oldBuffer; // And now we set buffer to the new buffer full of all the data.
        return bytes + newBytes;
    }
    return bytes;
}
EDIT: Am I being (unjustifiably) paranoid, and should I just set the buffer to 2048 and call it done?
BufferedInputStream, as noted by Roland, and DataInputStream.readFully(), which replaces all the looping code.
int BUFFER_AMOUNT = 256;
Should be final if you don't want it changing at runtime.
if (bytes == -1) {
Should be !=
Also, I'm not entirely clear on what you're trying to accomplish with this code. Do you mind shedding some light on that?
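To illustrate the readFully() suggestion: assuming the peer sends a length prefix first (an assumption on my part; without one you cannot know how many bytes to wait for), the whole recursive merge collapses to a few lines:

DataInputStream in = new DataInputStream(mInStream);
int length = in.readInt();   // hypothetical length prefix written by the sender
byte[] buffer = new byte[length];
in.readFully(buffer);        // blocks until exactly 'length' bytes arrive, or throws EOFException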
I have no idea what you mean by "small data sizes". You should measure whether the time is spent in kernel mode (then you are issuing too many reads directly on the socket) or in user mode (then your algorithm is too complicated).
In the former case, just wrap the input with a BufferedInputStream with 4096 bytes of buffer and read from it.
In the latter case, just use this code:
/**
 * Reads as much as possible from the stream.
 * @return The number of bytes read into the buffer, or -1
 *         if nothing has been read because the end of file has been reached.
 */
static int readGreedily(InputStream is, byte[] buf, int start, int len)
        throws IOException {
    int nread;
    int ptr = start; // index at which the data is put into the buffer
    int rest = len;  // number of bytes that we still want to read
    while ((nread = is.read(buf, ptr, rest)) > 0) {
        ptr += nread;
        rest -= nread;
    }
    int totalRead = len - rest;
    return (nread == -1 && totalRead == 0) ? -1 : totalRead;
}
This code completely avoids creating new objects and calling unnecessary methods, and it is straightforward.
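For example, with the asker's 256-byte buffer:

byte[] buffer = new byte[256];
int n = readGreedily(mInStream, buffer, 0, buffer.length);
if (n > 0) {
    // exactly n bytes sit in buffer[0..n-1], ready for the processing queue
}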