Java Audio Byte Buffer takes varying times to fill - java

I am opening a TargetDataLine to accept audio input in a given format.
I open and start the line, and I have a buffer which fills with bytes. This runs in a constant loop until an external parameter is changed.
Now, for a fixed sample rate and buffer size, I would expect this to always take the same amount of time to fill, i.e. if my buffer size was 48000 for an 8-bit stream and my sample rate was 48 kHz, I would expect my buffer to always take one second to fill. However, I am finding this varies greatly.
The following is the code I have used:
DataLine.Info info1 = new DataLine.Info(TargetDataLine.class, format1);
try (TargetDataLine line = (TargetDataLine) m1.getLine(info1)) {
    line.open(format1);
    line.start();
    while (!pauseInput) {
        long time1 = System.currentTimeMillis();
        int numBytesRead1 = line.read(buffer1, 0, buffer1.length);
        //chan1double = deinterleaveAudio(buffer1, chan1selectedchannel, chan1totalchannels);
        long time2 = System.currentTimeMillis();
        System.out.println(threadName + " Capture time = " + (time2 - time1));
    }
    line.stop();
}
The commented-out line is a process I want to run each time the buffer is full. I realise I cannot place it here as it will interrupt the stream, so I need to find a different way to call it, hence I have commented it out.
For testing purposes I have a buffer size of 4096. My audio format is 48 kHz 16-bit, so I would expect my byte buffer to be filled in 42.6 ms ((1/48000) * 2048; this is half the buffer size because each sample is two bytes). However, measuring each pass with System.currentTimeMillis(), it is coming back with anywhere between 123 ms and 250 ms.
Is there something I am missing here that I have not done?
EDIT: I have copied just the code into a brand-new application that doesn't even have a GUI or anything attached to it, purely to output to the console and see what is happening, making sure there are no background threads to interfere, and sure enough the same happens. 95% of the time the buffer with a predicted fill time of 250 ms fills within 255-259 ms. However, occasionally this will drop to 127 ms (which is physically impossible unless there is some weird buffer thing going on). Is this a bug in Java somewhere?

I don't think it is a good idea to rely on the timing in this way. It depends on many things, e.g. buffer size, mixer, etc. Moreover, your application is sharing the line's buffer with the mixer. If you need real-time processing, store your data in a circular buffer with a length that is large enough to hold the amount of data you need. In another thread, read the desired amount of data from the circular buffer and do your processing at a constant time interval. Thus you may sometimes overlap or miss some bytes between two consecutive processing passes, but you always have the expected amount of data.
When you open the line, you can specify the line's buffer size by using open(format, bufferSize), or you can check the actual buffer size by
calling DataLine.getBufferSize(). Then you need to choose the size of the smaller buffer you provide when you retrieve data through TargetDataLine.read(). This buffer has to be smaller than the line's buffer size; I would make it 1/4th, 1/8th, 1/16th or so of the line's buffer size. Another idea is to check the available bytes with DataLine.available() before calling read(). Note that read() is a blocking call (though it does not block the line's buffer), i.e. it will not return until the requested number of bytes has been read.
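A rough sketch of that idea (the one-second line buffer, the 1/8 read size, and the queue standing in for a circular buffer are my choices for illustration, not anything from the question):

import java.util.Arrays;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.TargetDataLine;

public class CaptureSketch {
    public static void main(String[] args) throws Exception {
        AudioFormat format = new AudioFormat(48000f, 16, 1, true, true);
        int lineBufferBytes = 48000 * 2;          // ask for roughly one second in the line's own buffer
        int chunkBytes = lineBufferBytes / 8;     // read in fractions of the line's buffer

        BlockingQueue<byte[]> filledChunks = new LinkedBlockingQueue<>();

        // Processing runs on its own thread so the capture loop never stalls on it.
        Thread processor = new Thread(() -> {
            try {
                while (true) {
                    byte[] chunk = filledChunks.take();   // wait for the next captured chunk
                    // ... de-interleave / analyse the chunk here ...
                }
            } catch (InterruptedException ignored) {
            }
        });
        processor.setDaemon(true);
        processor.start();

        try (TargetDataLine line = AudioSystem.getTargetDataLine(format)) {
            line.open(format, lineBufferBytes);
            System.out.println("Actual line buffer: " + line.getBufferSize() + " bytes");
            line.start();

            byte[] buffer = new byte[chunkBytes];
            while (true) {                                    // substitute your pauseInput flag here
                int n = line.read(buffer, 0, buffer.length);  // blocks, but the line keeps capturing
                if (n > 0) {
                    filledChunks.offer(Arrays.copyOf(buffer, n));
                }
            }
        }
    }
}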
For low-latency, direct communication between your application and the audio interface, you may consider ASIO.

For anyone looking at the same issue, I have been given an answer which half explains what is happening.
The thread scheduler decides when my code gets to run, and this can cause the measured time to vary by 10-20 ms. In earlier Java versions this could be as much as 70 ms.
This does not mean the stream is missing samples, just that this buffer will not provide a continuous stream. So any application looking to process this data in real time and pass it on to be written to an audio output stream needs to be aware of this extra potential latency.
I am still looking for the reason for the short buffer fill time every four or five passes. I was told it could be to do with the TargetDataLine buffer size being different from my buffer size, with just the remainder of that buffer being returned on that pass; however, I have changed them to be exactly the same and still no luck.

Related

Consistent popping sound while playing audio through a SourceDataLine

I'm writing a basic synth at the moment and have run into a bit of a strange problem. I get a constant popping sound while playing an array of bytes, representing 16 bit mono audio, through a SourceDataLine.
The pops play at a constant rate and, from what I can hear, a constant pitch. The pops do differ slightly in frequency though (again, from what I can hear): some notes have low-passed-sounding pops, and others sound high-passed. The pops are not overpowering, though; you can still hear the desired sound in the background.
Nothing changes the rate of the pops, not note pitch, not the SourceDataLine buffer size, not the number of bytes I write to it at a time, except sample rate.
Lowering the sample rate decreases the rate of the pops and vice-versa.
To test my side of the program, I printed out the data being written to the SourceDataLine for about half a second and looked through around 15 cycles of the played sine wave, and it was completely fine; no sudden jumps, clipping, or anything else.
The only two things I use the sample rate for are some basic math to help my sampler sample at the correct frequency (calculated only once per note, and definitely working, as the pitch is perfect) and creating the SourceDataLine.
Here's how I'm starting the SourceDataLine (Taken from multiple parts of the main method):
AudioFormat format = new AudioFormat(AudioEnvironment.SAMPLE_RATE, AudioEnvironment.BIT_DEPTH, 1, true, true);
SourceDataLine line = AudioSystem.getSourceDataLine(format);
line.open(format, 8000);
line.start();
My data is correctly in big-endian, tested by me changing the endian flag in the constructor and getting my ears blasted with white noise.
After the program has set everything up, it constantly writes data to the SourceDataLine in this infinite loop:
while (true) {
    for (Channel channel : channelSystem.getChannels()) {
        if (channel.pitch != 0) {
            wave.sample(channel, buffer);
            line.write(buffer, 0, AudioEnvironment.SUB_BUFFER_SIZE * 2);
        }
    }
}
(A Channel is a class I created that contains all the data for a single note (though obviously the program is not set up for polyphony at the moment), buffer is an array of bytes, wave.sample() is where I sample my data into buffer, and AudioEnvironment.SUB_BUFFER_SIZE * 2 is the size of buffer.)
I don't necessarily need an example of how to fix this in code, but an explanation of why this might be happening would be great.
EDIT: Something I should also probably add is that I've tried putting a print statement in the infinite write loop to print out the number of available bytes in the SourceDataLine, and it stays constantly around 500 - 2000, occasionally getting up to around 5000, but never near 8000, so the buffer is never running out of data.
Well as it turns out, the problem was completely unrelated to what I thought it might be.
Turns out there was a single equation I had written in my sampler that was just blatantly wrong.
After 2048 samples had been played, I would just kinda loop back to the beginning of the waveform, causing the popping.
I honestly have no idea why I wrote that in, but hey, it works now.
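For anyone else hitting the same symptom, a minimal sketch of a sampler that avoids this class of bug, by carrying the phase across buffers and wrapping it on the period rather than after a fixed sample count (names here are illustrative, not the original code):

public class SineSampler {
    private double phase = 0.0;

    // Fills buf with a sine wave; the phase carries over between calls,
    // so the waveform stays continuous across buffer boundaries.
    public void sample(short[] buf, double freq, double sampleRate) {
        double inc = 2.0 * Math.PI * freq / sampleRate;
        for (int i = 0; i < buf.length; i++) {
            buf[i] = (short) (Short.MAX_VALUE * Math.sin(phase));
            phase += inc;
            if (phase >= 2.0 * Math.PI) {
                phase -= 2.0 * Math.PI;   // wrap on the period, not after a fixed sample count
            }
        }
    }
}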

Setting java buffer to read from write position - offset

I have created a circular byte buffer in Java based on a few existing resources and tutorials. I am reading from Java Sound's line buffer and writing into a large 5-second circular buffer.
For one of my functions, I would like to read from the circular buffer a few ms before the current write position, not from the start point of the circular buffer. How do I achieve this?
Currently I read from buffer using:
double[] readFromBuffer = new double[halfWindowSize];
input.circ.read(readFromBuffer, 0, halfWindowSize, true);
I understand that the 0 is the offset, but I am not sure how it is used. It appears I want to query the current write position of the buffer, and then set the read position with a negative offset of x bytes (however many equate to a few ms). This needs to be done outside of my loop, as I then continue reading from the buffer in a loop.
Currently, reading starts at the beginning of the buffer, which when graphing can result in up to a 5-second delay from input to output, which is too great and makes the software appear laggy.
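To illustrate what I mean, roughly this (getWritePosition(), getCapacity() and setReadPosition() are hypothetical methods that my buffer class does not have yet; the arithmetic is just a guess at how it might work):

int offsetMs = 20;                                           // how far behind the write position to read
int offsetFrames = (int) (sampleRate * offsetMs / 1000.0);   // ms converted to frames at my sample rate

int writePos = input.circ.getWritePosition();                // hypothetical accessor
int capacity = input.circ.getCapacity();                     // hypothetical accessor
int readPos  = ((writePos - offsetFrames) % capacity + capacity) % capacity;  // wrap around the ring

input.circ.setReadPosition(readPos);                         // hypothetical mutator, called once before the loop

// ...then keep reading in the loop as before:
double[] readFromBuffer = new double[halfWindowSize];
input.circ.read(readFromBuffer, 0, halfWindowSize, true);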
Any help would be appreciated, thanks.

How to read a large file with a dynamic buffer size - depending on data read from the file

I have a file containing data that is meaningful only in chunks of a certain size, and that size is prepended to each chunk, e.g.
{chunk_1_size}
{chunk_1}
{chunk_2_size}
{chunk_2}
{chunk_3_size}
{chunk_3}
{chunk_4_size}
{chunk_4}
{chunk_5_size}
{chunk_5}
.
.
{chunk_n_size}
{chunk_n}
The file is really, really big, ~2 GB, and the chunk size is ~20 MB (which is the buffer size I want to have).
I would like to buffer-read this file to reduce the number of calls to the actual hard disk.
But I am not sure how much buffer to have because the chunk size may vary.
Pseudocode of what I have in mind:
while (!EOF) {
    /* the chunk size is an integer, i.e. 4 bytes */
    readChunkSize();
    /* according to the chunk size, read that number of bytes from the file */
    readChunk(chunkSize);
}
If, let's say, I pick an arbitrary buffer size, then I might run into situations like:
First buffer contains chunkSize_1 + chunk_1 + partialChunk_2 --- I have to keep track of the leftover, then get the remaining part of the chunk from the next buffer and concatenate it to the leftover to complete the chunk
First buffer contains chunkSize_1 + chunk_1 + partialChunkSize_2 (the chunk size is an integer, i.e. 4 bytes, so let's say I get only two of those bytes from the first buffer) --- I have to keep track of partialChunkSize_2 and then get the remaining bytes from the next buffer to form an integer that actually gives me the next chunkSize
The buffer might not even be able to hold one whole chunk at a time --- I have to keep calling read until the first chunk is completely read into memory
You don't have much control over the number of calls to the hard disk. There are several layers between you and the hard disk (OS, driver, hardware buffering) that you cannot control.
Set a reasonable buffer size in your Java code (1M) and forget about it unless and until you can prove there is a performance issue that is directly related to buffer sizes. In other words, do not fall into the trap of premature optimization.
See also https://stackoverflow.com/a/385529/18157
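As a rough sketch of that advice, assuming the 4-byte size header is big-endian (DataInputStream's convention) and that "data.bin" stands in for the real file:

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

public class ChunkReader {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream("data.bin"), 1 << 20))) {
            while (true) {
                int chunkSize;
                try {
                    chunkSize = in.readInt();   // 4-byte size header
                } catch (EOFException eof) {
                    break;                      // clean end of file
                }
                byte[] chunk = new byte[chunkSize];
                in.readFully(chunk);            // BufferedInputStream worries about the disk reads
                process(chunk);
            }
        }
    }

    private static void process(byte[] chunk) {
        // ... handle one complete chunk ...
    }
}

If the sizes are little-endian instead, the header can be read into a small byte[4] and decoded with ByteBuffer.order(ByteOrder.LITTLE_ENDIAN).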
You might need to do some analysis to get an idea of the average chunk size before deciding how much data to read at once.
You are saying you want to keep a buffer and read data until the chunk is complete, so that you have some meaningful data.
Are you copying the file somewhere else, or sending this data to another place? For some activities the Java NIO packages have better facilities than reading data into JVM buffers.
The buffer size should be large enough to read the biggest chunks of data.
If you are planning to hold the data in memory, reading it through buffers and keeping it there is still a memory-costly operation; buffers can be freed in many ways, for example with basic flush operations.
Please also check Apache Commons IO (FileUtils) for reading/writing data.

ByteArrayOutputStream Out of memory error while recording in java sound

I have encountered a very strange problem. The relevant portion of my code is below.
try {
    while (!stopCapture) {
        // Read data from the internal buffer of the data line.
        int cnt = this.recLine.read(tempBuffer, 0, tempBuffer.length);
        if (cnt > 0) {
            // Save data in output stream object.
            byteArrayOutputStream.write(tempBuffer, 0, cnt);
            // System.out.println(" bytes " + tempBuffer[0]);
        } // end if
    } // end while
    // AudioSystem.write(myAIS, targetType, outputFile);
    byteToWave(byteArrayOutputStream);
    byteArrayOutputStream.close();
} catch (IOException e) {
    // TODO provide runtime exception to reach it to presentation layer
    e.printStackTrace();
} catch (Exception ex) {
    ex.printStackTrace();
}
recLine is the TargetDataLine from which I am recording sound into byteArrayOutputStream. It works perfectly well for 40-48 seconds in my tests, but every time it reaches 49 seconds it throws the exception below:
Exception in thread "Thread-5" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2786)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
at com.actura.app.capture.ActuraRecorder.run(ActuraRecorder.java:109)
at java.lang.Thread.run(Thread.java:619)
I am using this technique because I need the recorded bytes, and from those bytes I am successfully drawing a waveform in the UI.
During testing I found that the exception is raised when the size of the byteArrayOutputStream is only 7.3 MB.
Can I write this byteArrayOutputStream to a random access file and then reset the byteArrayOutputStream every time it reaches its limit?
How do I know the limit of the byteArrayOutputStream in advance?
I checked against Integer.MAX_VALUE, but as I said it raised an exception at just 7.3 MB, so I cannot get anywhere near Integer.MAX_VALUE.
This is an applet that runs over the internet, so increasing the memory size on my machine will not help me. How could I set it on my clients' computers?
Your problem is that the JVM is running out of memory because you keep all the data in memory. Extending the heap space only helps temporarily, until you need to work with even longer recordings.
One idea might be to replace the ByteArrayOutputStream with a FileOutputStream to a temporary File. This would not require all the audio data to be kept in memory. An even nicer solution is Guava's FileBackedOutputStream, which stores the data in a temporary file only if it exceeds a certain threshold.
Another solution would be to update the waveform directly with each buffer, by creating a method which receives the buffer with the audio data in it and applies it to the current state of the waveform. This way you would not need to store the audio data at all.
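A minimal sketch of the temporary-file idea applied to the loop from the question (imports from java.io assumed; byteToWave would need a matching overload or a second pass over the file):

File temp = File.createTempFile("capture", ".pcm");
try (OutputStream out = new BufferedOutputStream(new FileOutputStream(temp))) {
    while (!stopCapture) {
        int cnt = recLine.read(tempBuffer, 0, tempBuffer.length);
        if (cnt > 0) {
            out.write(tempBuffer, 0, cnt);   // spill to disk instead of growing a heap buffer
        }
    }
}
// Afterwards, stream the PCM data back from 'temp' in small blocks to build the
// wave file and to draw the waveform, instead of holding everything in memory.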
This problem occurs because a HotSpot JVM's heap has a fixed upper bound on its size. The default limit depends on your platform, but it could be as low as 40 MB.
What you are doing is reading large amounts of data from your audio stream and buffering them in memory. Given that the heap has an upper limit, your application is eventually going to run into that limit, and the result will be an OutOfMemoryError exception.
The obvious solution is to increase the maximum heap space for the JVM using the -Xmx option. (Refer to the java command manual entry for more information on this option.) However, there is an ultimate limit to how big you can make the JVM's heap, depending on your hardware, the OS, and whether or not you are using a 64-bit JVM.
You could also save the audio data to a file or attempt to compress it or reduce it, but these will all make the graphing more complicated.
Others have discussed the source of the problem.
To solve it, either:
Draw the bytes a little at a time, or..
Store one single BufferedImage, grab the bytes in small chunks, and draw them immediately to the image.
The first technique might result in something like that seen in this YouTube video.
You can try to increase the JVM heap space, or try to compress your sound data so it consumes less memory.
It will definitely fill your memory eventually, and its behavior will be different on different computers.
What you can do is use a FileOutputStream and write the data to it; once your loop ends, pull the data from there and convert it to WAV.
Also, write your data in small packets of bytes (preferably 4096); make sure the length of tempBuffer does not grow beyond 4096.
Hope this helps.

Android Audio - Streaming sine-tone generator odd behaviour

first time poster here. I usually like to find the answer myself (be it through research or trial-and-error), but I'm stumped here.
What I'm trying to do:
I'm building a simple android audio synthesizer. Right now, I'm just playing a sine-tone in real time, with a slider in the UI that changes the tone's frequency as the user adjusts it.
How I've built it:
Basically, I have two threads: a worker thread and an output thread. The worker thread simply fills a buffer with the sine wave data every time its tick() method is called. Once the buffer is filled, it alerts the output thread that the data is ready to be written to the audio track. The reason I am using two threads is that AudioTrack.write() blocks, and I want the worker thread to be able to begin processing its data as soon as possible (rather than waiting for the audio track to finish writing). The slider on the UI simply changes a variable in the worker thread, so that any changes to the frequency (via the slider) will be read by the worker thread's tick() method.
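Roughly how the handoff between the two threads looks, stripped down (the queue, its size, and the names are a simplification for this post, not my exact code; worker and _audioTrackOut are fields set up elsewhere):

final BlockingQueue<short[]> ready = new ArrayBlockingQueue<>(4);

// Worker thread: fill a buffer, hand it off, and immediately start on the next one.
Thread workerThread = new Thread(() -> {
    while (true) {                            // replace with a volatile "running" flag
        short[] filled = worker.tick();       // synthesize the next buffer
        try {
            ready.put(filled);                // blocks only if the output thread falls behind
        } catch (InterruptedException e) {
            return;
        }
    }
});

// Output thread: write() blocks, but the worker keeps producing in parallel.
Thread outputThread = new Thread(() -> {
    while (true) {
        try {
            short[] next = ready.take();
            _audioTrackOut.write(next, 0, next.length);
        } catch (InterruptedException e) {
            return;
        }
    }
});
workerThread.start();
outputThread.start();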
What works:
Almost everything; the threads communicate well, and there don't seem to be any gaps or clicks in the playback. Despite the large buffer size (thanks, Android), the responsiveness is OK. The frequency variable does change, as do the intermediate values used during the buffer calculations in the tick() method (verified with Log.i()).
What doesn't work:
For some reason, I can't seem to get a continuous change in audible frequency. When I adjust the slider, the frequency changes in steps, often as wide as fourths or fifths. Theoretically, I should be hearing changes as small as 1 Hz, but I'm not. Oddly enough, it seems as if changes to the slider are causing the sine wave to play through intervals of the harmonic series; however, I can verify that the frequency variable is NOT snapping to integer multiples of the default frequency.
My Audio track is set up as such:
_buffSize = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
_audioTrackOut = new AudioTrack(AudioManager.STREAM_MUSIC, _sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, _buffSize, AudioTrack.MODE_STREAM);
The worker thread's buffer is being populated (via tick()) as such:
public short[] tick()
{
    short[] outBuff = new short[_outBuffSize / 2]; // (buffer size in bytes) / 2
    for (int i = 0; i < _outBuffSize / 2; i++)
    {
        outBuff[i] = (short) (Short.MAX_VALUE * ((float) Math.sin(_currentAngle)));
        // Update angleIncrement, as the frequency may have changed by now
        _angleIncrement = (float) (2.0f * Math.PI) * _freq / _sampleRate;
        _currentAngle = _currentAngle + _angleIncrement;
    }
    return outBuff;
}
The audio data is being written like this:
_audioTrackOut.write(fromWorker, 0, fromWorker.length);
Any help would be greatly appreciated. How can I get more gradual changes in frequency? I'm pretty confident that my logic in tick() is sound, as Log.i() verifies that the variables angleIncrement and currentAngle are being updated properly.
Thank you!
Update:
I found a similar problem here: Android AudioTrack buffering problems
The proposed solution was that one must be able to produce samples fast enough for the AudioTrack, which makes sense. I lowered my sample rate to 22050 Hz and ran some empirical tests: I can fill my buffer (via tick()) in approximately 6 ms in the worst case. This is more than adequate. At 22050 Hz, the AudioTrack gives me a buffer size of 2048 samples (4096 bytes), so each filled buffer lasts for ~0.0928 seconds of audio, which is much longer than it takes to create the data (1-6 ms). So, I know that I don't have any problems producing samples fast enough.
I should also note that for about the first 3 seconds of the application's lifecycle, it works fine: a smooth sweep of the slider produces a smooth sweep in the audio output. After this, it starts to get really choppy (the sound only changes in steps of roughly 100 Hz), and after that, it stops responding to slider input at all.
I also fixed one bug, but I don't think it has an effect. AudioTrack.getMinBufferSize() returns the smallest allowable buffer size in BYTES, and I was using this number as the length of the buffer in tick() - I now use half this number (2 Bytes per sample).
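In other words, something along these lines (sketch; the variable name is mine):

int minBufferBytes = AudioTrack.getMinBufferSize(_sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT); // size in BYTES
short[] outBuff = new short[minBufferBytes / 2];                         // 2 bytes per 16-bit sample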
I've found it!
It turns out the problem has nothing to do with buffers or threading.
It sounds fine in the first couple of seconds, because the angle of the computation is relatively small. As the program runs and the angle grows, Math.sin(_currentAngle) begins to produce unreliable values.
So, I replaced Math.sin() with FloatMath.sin().
I also replaced
_currentAngle = _currentAngle + _angleIncrement;
with
_currentAngle = ((_currentAngle + _angleIncrement) % (2.0f * (float) Math.PI));
so the angle is always < 2*PI.
Works like a charm! Thanks very much for your help, praetorian droid!
