I have created a circular byte buffer in Java based on a few existing resources and tutorials. I am using Java Sound's linear buffer and writing into a large 5-second circular buffer.
For one of my functions, I would like to read from the circular buffer a few ms behind the current write position, rather than from the start of the circular buffer. How do I achieve this?
Currently I read from the buffer using:
double[] readFromBuffer = new double[halfWindowSize];
input.circ.read(readFromBuffer, 0, halfWindowSize, true);
I understand that the 0 is the offset, but I'm not sure how it is used. It appears I want to query the current write position of the buffer and then set the read position to a negative offset of x bytes (however many equate to a few ms). This needs to be done outside of my loop, as I then continue reading from the buffer in a loop.
Currently, reading starts at the beginning of the buffer, which when graphing can result in up to a 5-second delay from input to output. That is too great and makes the software appear laggy.
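Roughly, what I imagine is something like the following (the class and method names here are made up for illustration; the ms-to-bytes conversion assumes a known sample rate and bytes per sample):

```java
// Illustrative sketch: keep a monotonically increasing write position,
// and start reading a fixed number of bytes behind it.
class RingBuffer {
    private final byte[] data;
    private long writePos = 0; // total bytes written so far (never wraps)

    RingBuffer(int capacity) {
        data = new byte[capacity];
    }

    void write(byte[] src, int len) {
        for (int i = 0; i < len; i++) {
            data[(int) (writePos++ % data.length)] = src[i];
        }
    }

    // Read `len` bytes starting `lagBytes` behind the current write position.
    byte[] readBehind(int lagBytes, int len) {
        byte[] out = new byte[len];
        long start = writePos - lagBytes;
        for (int i = 0; i < len; i++) {
            // double modulo keeps the index non-negative even before wrap-around
            int idx = (int) (((start + i) % data.length + data.length) % data.length);
            out[i] = data[idx];
        }
        return out;
    }

    // How many bytes correspond to `ms` milliseconds of audio.
    static int millisToBytes(double ms, double sampleRate, int bytesPerSample) {
        return (int) (ms / 1000.0 * sampleRate * bytesPerSample);
    }
}
```

A call like readBehind(RingBuffer.millisToBytes(20, 48000, 2), n) would then start reading 20 ms behind the write head.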
Any help would be appreciated, thanks.
I am trying to use threading to read a BMP image in Java. More exactly, I want to read it in 4 chunks of data. This is for educational purposes only; I know it's not something you would technically need.
However, I don't know how to read the file byte by byte or chunk by chunk. The only thing I found was readAllBytes, which is not what I need, or readByte, which requires me to already have the array of bytes, so it is not a threaded read anymore.
Is there any way I could read byte by byte or block by block for a given path?
Thank you in advance!
.read(), with no arguments, reads exactly one byte, but there are two important notes on this:
This is not thread safe. Threading + disk generally doesn't work: the bottleneck is the disk, not the CPU, and you need to add a guard so that only one thread is reading at any one time. Given that the disk is so slow, you'll end up in a situation where one thread reads from disk and processes the data so received; while that is happening, one of the other X threads that were all waiting on the disk can 'go' (the others still have to wait). But each thread is done reading and processing its data before any other thread even gets unpaused: you gain nothing.
read() on a FileInputStream is usually incredibly slow. These are low-level operations, but disks tend to read entire blocks at a time and are incapable of reading one byte at a time. Thus, read() is implemented as: read the smallest chunk one can read (usually still 4096 or more bytes), take the one byte from that chunk that is needed, and just toss the remainder in the garbage can. In other words, if you read a file by calling .read() 4096 times, that reads the same chunk from disk 4096 times. Whereas calling:
byte[] b = new byte[4096];
int read = fileIn.read(b);
would fill the entire byte array with the chunk, and is thus 4096x faster.
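Put together, a complete single-threaded chunked read might look like this (class and method names are illustrative, not from any particular API):

```java
import java.io.FileInputStream;
import java.io.IOException;

class ChunkedReader {
    // Reads the whole file block by block; each in.read(block) call
    // fetches up to 4096 bytes in one disk access instead of one byte per call.
    static long countBytes(String path) throws IOException {
        byte[] block = new byte[4096];
        long total = 0;
        try (FileInputStream in = new FileInputStream(path)) {
            int read;
            while ((read = in.read(block)) != -1) {
                // process block[0..read) here
                total += read;
            }
        }
        return total;
    }
}
```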
If your aim is to learn about multithreading, 'reading a file' is not how to learn about it; you can't observe anything meaningful in action this way.
If your aim is to speed up BMP processing, 'multithread the reading process' is not the way either. I'm at a loss to explain why multithreading is involved here at all. It is suitable neither for learning about threading, nor for speeding anything up.
I am opening a TargetDataLine to accept audio input for a given format.
I start and open the line, and I have a buffer which fills with bytes. This runs on a constant loop until an external parameter is changed.
Now, for a fixed sample rate and buffer size, I would expect this to always take the same amount of time to fill, i.e. if my buffer size was 48000 for an 8-bit stream and my sample rate was 48kHz, I would expect my buffer to always take 1 second to fill. However, I am finding this varies greatly.
The following is the code I have used:
DataLine.Info info1 = new DataLine.Info(TargetDataLine.class, format1);
try (TargetDataLine line = (TargetDataLine) m1.getLine(info1)) {
    line.open(format1);
    line.start();
    while (!pauseInput) {
        long time1 = System.currentTimeMillis();
        int numBytesRead1 = line.read(buffer1, 0, buffer1.length);
        //chan1double = deinterleaveAudio(buffer1, chan1selectedchannel, chan1totalchannels);
        long time2 = System.currentTimeMillis();
        System.out.println(threadName + " Capture time = " + (time2 - time1));
    }
    line.stop();
}
The commented-out line is a process I want to run each time the buffer is full. I realise I cannot place it here, as it will interrupt the stream, so I need to find a different way to call it; hence it is commented out.
For testing purposes I have a buffer size of 4096. My audio format is 48kHz 16-bit, so I would expect my byte buffer to be filled in 42.6ms ((1/48000) * 2048; the sample period multiplied by half the buffer size, as each sample is two bytes). However, using currentTimeMillis to measure each pass, it is coming back with 123ms and 250ms and varying between those times.
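For reference, the arithmetic above can be written out as a small helper (illustrative only, not part of the original code):

```java
// Expected time (in ms) for a line.read() of `bufferBytes` to return,
// given the sample rate and bytes per sample.
class FillTime {
    static double expectedFillMillis(int bufferBytes, int bytesPerSample, double sampleRate) {
        double samples = bufferBytes / (double) bytesPerSample; // samples in the buffer
        return samples / sampleRate * 1000.0;                   // time to capture them
    }
}
```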
Is there something I am missing out here that I have not done?
EDIT: I have copied just the code into a brand new application that doesn't even have a GUI or anything attached to it, purely to output to the console and see what is happening, making sure there are no background threads to interfere, and sure enough the same happens. 95% of the time, the buffer with a predicted fill time of 250ms fills within 255-259ms. However, occasionally this drops to 127ms (which is physically impossible unless there is some weird buffering going on). Is this a bug in Java somewhere?
I don't think it is a good idea to adjust timing this way. It depends on many things, e.g. bufferSize, mixer, etc. Moreover, your application is sharing the line's buffer with the mixer. If you have real-time processing, store your data in a circular buffer whose length is good enough to hold the amount of data that you need. In another thread, read the desired amount of data from the circular buffer and do your processing at a constant time interval. Thus you may sometimes overlap or miss some bytes between two consecutive processings, but you always have the expected amount of bytes.
When you open the line, you can specify the line's buffer size by using open(format, bufferSize), or you can check the actual buffer size by calling DataLine.getBufferSize(). Then you need to specify the size of the small buffer that you provide when you retrieve data through TargetDataLine.read(). That buffer has to be smaller than the line's buffer; I would consider 1/4th, 1/8th, 1/16th or so of the line's buffer size. Another idea is to check the available bytes with DataLine.available() before calling read(). Note that read() is a blocking call (though it doesn't block the line's buffer), i.e. it will not return until the requested amount of bytes has been read.
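A minimal sketch of that circular-buffer arrangement might look like this (all names are illustrative; the capture thread would call write() after each line.read(), and a separate thread would call readLatest() at a constant interval, e.g. via a ScheduledExecutorService):

```java
// Illustrative: the capture thread calls write(); the processing thread
// calls readLatest() at a fixed interval. Consecutive windows may overlap
// or skip bytes, but each read returns exactly `len` bytes, as described above.
class AudioRing {
    private final byte[] ring;
    private long written = 0; // total bytes written so far

    AudioRing(int capacity) {
        ring = new byte[capacity];
    }

    synchronized void write(byte[] src, int len) {
        for (int i = 0; i < len; i++) {
            ring[(int) (written++ % ring.length)] = src[i];
        }
    }

    // Copy the most recent `len` bytes into dst.
    synchronized void readLatest(byte[] dst, int len) {
        long start = written - len;
        for (int i = 0; i < len; i++) {
            int idx = (int) (((start + i) % ring.length + ring.length) % ring.length);
            dst[i] = ring[idx];
        }
    }
}
```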
For low latency direct communication between your application and audio interface, you may consider ASIO.
For anyone looking at the same issue, I have been given an answer which half explains what is happening.
The thread scheduler decides when the code can run, and this can cause this to vary by 10-20ms. In the earlier days this was as much as 70ms.
This does not mean the stream is missing samples, just that this buffer will not provide a continuous stream. So any application looking at processing this data in real time and passing it to an audio output stream needs to be aware of this extra potential latency.
I am still looking at the reason for the short buffer fill time every four or five passes. I was told it could be because the TargetDataLine buffer size is different from my buffer size, with just the remainder of that buffer being written on that pass; however, I have changed this to be exactly the same and still no luck.
I have a file containing data that is meaningful only in chunks of certain size which is appended at the start of each chunk, for e.g.
{chunk_1_size}
{chunk_1}
{chunk_2_size}
{chunk_2}
{chunk_3_size}
{chunk_3}
{chunk_4_size}
{chunk_4}
{chunk_5_size}
{chunk_5}
.
.
{chunk_n_size}
{chunk_n}
The file is really, really big (~2GB) and the chunk size is ~20MB (which is the buffer size that I want to have).
I would like to buffer-read this file to reduce the number of calls to the actual hard disk.
But I am not sure how much buffer to have because the chunk size may vary.
pseudo code of what I have in mind:
while (!EOF) {
    /* the chunk size is an integer, i.e. 4 bytes */
    chunkSize = readChunkSize();
    /* according to the chunk size, read that number of bytes from the file */
    readChunk(chunkSize);
}
If lets say I have random buffer size then I might crawl into situations like:
First buffer contains chunkSize_1 + chunk_1 + partialChunk_2 --- I have to keep track of the leftover, then get the remaining chunk from the next buffer and concatenate it to the leftover to complete the chunk
First buffer contains chunkSize_1 + chunk_1 + partialChunkSize_2 (the chunk size is an integer, i.e. 4 bytes, so let's say I get only two of those bytes from the first buffer) --- I have to keep track of partialChunkSize_2 and then get the remaining bytes from the next buffer to form an integer that actually gives me the next chunkSize
The buffer might not even be able to hold one whole chunk at a time --- I have to keep calling read until the first chunk is completely read into memory
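One way to sidestep that bookkeeping entirely is to wrap the file in a BufferedInputStream (which performs the block-sized disk reads) and a DataInputStream (which reassembles the 4-byte size and the chunk across buffer boundaries). A hedged sketch, with illustrative names:

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

class ChunkFileReader {
    static List<byte[]> readChunks(InputStream raw) throws IOException {
        List<byte[]> chunks = new ArrayList<>();
        try (DataInputStream in = new DataInputStream(new BufferedInputStream(raw))) {
            while (true) {
                int size;
                try {
                    size = in.readInt();   // 4-byte big-endian chunk size
                } catch (EOFException eof) {
                    break;                 // clean end of file
                }
                byte[] chunk = new byte[size];
                in.readFully(chunk);       // blocks until the whole chunk is read;
                                           // a truncated chunk raises EOFException
                chunks.add(chunk);
            }
        }
        return chunks;
    }
}
```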
You don't have much control over the number of calls to the hard disk. There are several layers between you and the hard disk (OS, driver, hardware buffering) that you cannot control.
Set a reasonable buffer size in your Java code (1M) and forget about it unless and until you can prove there is a performance issue that is directly related to buffer sizes. In other words, do not fall into the trap of premature optimization.
See also https://stackoverflow.com/a/385529/18157
You might need to do some analysis to get an idea of the average chunk size before picking a buffer size for reading data.
You are saying you keep the buffer size fixed and read until the chunk is done, so that you have meaningful data.
Are you copying the file somewhere else, or sending this data to another place?
For some activities, the Java NIO packages have better implementations to deal with this than reading data into JVM buffers.
The buffer size should be large enough to read the maximum chunk of data.
If you plan to hold the data in memory, reading it through buffers and keeping it there is still a memory-costly operation; buffers can be freed in many ways, e.g. using basic flush operations.
Please also check the Apache Commons FileUtils class for reading/writing data.
As we all know, Java allows us to use a byte array as a buffer for data. My case here is with J2ME.
The scenario is that I have two buffers of equal size, and I need to swap them as they fill up one by one.
In detail
Two buffers, buff1 and buff2
Reading data from buff1 while writing other data to buff2
Then, when buff2 gets full
They swap positions: now reading from buff2 and writing to buff1
The above cycle goes on
So how do I detect when a buffer is full and is ready to be swapped?
so how do I detect when a buffer is full
The buffer itself is never full (or empty). It is just a fixed amount of reserved memory.
You need to keep track of the useful parts (i.e. those with meaningful data) yourself. Usually, this is just an integer that counts how many bytes have been written into the buffer (starting from the beginning).
When that integer reaches the buffer length, your buffer is "full".
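A minimal sketch of that counter-based approach (all names are illustrative, not from the original code):

```java
// Track how many bytes have been written; the write buffer is "full" when
// the count reaches its length, at which point the two arrays swap roles.
class SwapBuffers {
    private byte[] writeBuf, readBuf;
    private int filled = 0; // bytes written into writeBuf so far

    SwapBuffers(int size) {
        writeBuf = new byte[size];
        readBuf = new byte[size];
    }

    // Returns true when the write buffer just became full and was swapped.
    boolean put(byte b) {
        writeBuf[filled++] = b;
        if (filled == writeBuf.length) {
            byte[] tmp = writeBuf; // swap roles
            writeBuf = readBuf;
            readBuf = tmp;
            filled = 0;
            return true;
        }
        return false;
    }

    // The buffer that was just filled, now available for reading.
    byte[] readable() {
        return readBuf;
    }
}
```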
Three values can be used to specify the state of a buffer at any given moment in time:
* position
* limit
* capacity
Position
When you read from a channel, you put the data that you read into an underlying array. The position variable keeps track of how much data you have written. More precisely, it specifies into which array element the next byte will go. Thus, if you've read three bytes from a channel into a buffer, that buffer's position will be set to 3, referring to the fourth element of the array.
Limit
The limit variable specifies how much data there is left to get (in the case of writing from a buffer into a channel), or how much room there is left to put data into (in the case of reading from a channel into a buffer).
The position is always less than, or equal to, the limit.
Capacity
The capacity of a buffer specifies the maximum amount of data that can be stored therein. In effect, it specifies the size of the underlying array -- or, at least, the amount of the underlying array that we are permitted to use.
The limit can never be larger than the capacity.
so how do I detect when a buffer is full and is ready to be swapped?
Check the values of the position and the capacity fields or the limit .
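These three values can be observed directly on a java.nio.ByteBuffer (a small demo; the helper name is made up):

```java
import java.nio.ByteBuffer;

class BufferStateDemo {
    // Returns {position, limit, capacity} after writing `bytesWritten`
    // bytes into a freshly allocated buffer of the given capacity.
    static int[] stateAfterWriting(int capacity, int bytesWritten) {
        ByteBuffer buf = ByteBuffer.allocate(capacity);
        for (int i = 0; i < bytesWritten; i++) {
            buf.put((byte) i);
        }
        // While filling, position counts the bytes written and limit == capacity.
        return new int[] { buf.position(), buf.limit(), buf.capacity() };
    }
}
```

When position reaches the limit, the buffer has no room left: that is the "full" condition to check before swapping.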
When I'm drawing on a Canvas, I use the createBufferStrategy(2) method to create two buffers. However, I've seen other people create three buffers multiple times, and I understand that many more can be used.
I can understand the need for two buffers but I cannot understand the logic behind using more.
My question is - what is the gain of using more than two buffers, and how does that affect performance compared to two buffers?
Thanks in advance.
With double buffering, the front buffer is being displayed and the back buffer is being drawn in. Once the drawing is finished, but before the buffers are flipped, neither buffer can be touched. This could lead to a wait period during which no drawing can be done.
Triple buffering is a way to side-step the wait. There are two back buffers: once the drawing in one back buffer is complete, it can immediately start in the other back buffer.
Wikipedia has more details.