I have a callback that gets incoming audio data as FloatBuffer containing 1024 floats that gets called several times per second. But I need an AudioInputStream since my system only works with them.
Converting the floats into 16-bit signed PCM audio data is not a problem, but I cannot create an InputStream out of it. The AudioInputStream constructor only accepts data with a known length, but I have a continuous stream. AudioSystem.getAudioInputStream throws a "java.io.IOException: mark/reset not supported" if I feed it a PipedInputStream containing the audio data.
Any ideas?
Here's my current code:
Jack jack = Jack.getInstance();
JackClient client = jack.openClient("Test", EnumSet.noneOf(JackOptions.class), EnumSet.noneOf(JackStatus.class));
JackPort in = client.registerPort("in", JackPortType.AUDIO, EnumSet.of(JackPortFlags.JackPortIsInput));
PipedInputStream pin = new PipedInputStream(1024 * 1024 * 1024);
PipedOutputStream pout = new PipedOutputStream(pin);
client.setProcessCallback(new JackProcessCallback() {
    public boolean process(JackClient client, int nframes) {
        FloatBuffer inData = in.getFloatBuffer();
        byte[] buffer = new byte[inData.capacity() * 2];
        for (int i = 0; i < inData.capacity(); i++) {
            int sample = Math.round(inData.get(i) * 32767);
            buffer[i * 2] = (byte) sample;
            buffer[i * 2 + 1] = (byte) (sample >> 8);
        }
        try {
            pout.write(buffer, 0, buffer.length);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return true;
    }
});
client.activate();
client.transportStart();
Thread.sleep(10000);
client.transportStop();
client.close();
AudioInputStream audio = AudioSystem.getAudioInputStream(new BufferedInputStream(pin, 1024 * 1024 * 1024));
AudioSystem.write(audio, Type.WAVE, new File("test.wav"));
It uses the JnaJack library, but it doesn't really matter where the data comes from. The conversion to bytes is fine, by the way: writing that data directly to a SourceDataLine works correctly. But I need the data as an AudioInputStream.
AudioSystem.getAudioInputStream expects a stream that conforms to a supported AudioFileFormat, which means it must be of a known file type. From the documentation:
The stream must point to valid audio file data.
And also from that documentation:
The implementation of this method may require multiple parsers to examine the stream to determine whether they support it. These parsers must be able to mark the stream, read enough data to determine whether they support the stream, and reset the stream's read pointer to its original position. If the input stream does not support these operations, this method may fail with an IOException.
You can create your own AudioInputStream using the three-argument constructor. If the length is not known, it can be specified as AudioSystem.NOT_SPECIFIED. Frustratingly, neither the constructor documentation nor the class documentation mentions this, but the other constructor's documentation does.
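For example, a minimal sketch, assuming the 44.1 kHz, 16-bit, signed, little-endian mono format implied by the conversion loop in the question, and the existing PipedInputStream pin:
// Sketch only: wrap the piped stream in an AudioInputStream of unspecified length.
// The format parameters (44100 Hz, 16-bit, mono, signed, little-endian) are assumptions
// matching the byte conversion in the question.
AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
AudioInputStream audio = new AudioInputStream(pin, format, AudioSystem.NOT_SPECIFIED);
AudioSystem.write(audio, AudioFileFormat.Type.WAVE, new File("test.wav"));
Note that AudioSystem.write will keep reading until the piped stream reports end-of-stream, i.e. until pout is closed on the callback side.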
Related
Problem: Wav file loads and is processed by AudioDispatcher, but no sound plays.
First, the permissions:
public void checkPermissions() {
    if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this.requireContext(), Manifest.permission.RECORD_AUDIO)) {
        //When permission is not granted by user, show them message why this permission is needed.
        if (ActivityCompat.shouldShowRequestPermissionRationale(this.requireActivity(), Manifest.permission.RECORD_AUDIO)) {
            Toast.makeText(this.getContext(), "Please grant permissions to record audio", Toast.LENGTH_LONG).show();
            //Give user option to still opt-in the permissions
        }
        ActivityCompat.requestPermissions(this.requireActivity(), new String[]{Manifest.permission.RECORD_AUDIO}, MY_PERMISSIONS_RECORD_AUDIO);
        launchProfile();
    }
    //If permission is granted, then proceed
    else if (ContextCompat.checkSelfPermission(this.requireContext(), Manifest.permission.RECORD_AUDIO) == PackageManager.PERMISSION_GRANTED) {
        launchProfile();
    }
}
Then the launchProfile() function:
public void launchProfile() {
    AudioMethods.test(getActivity().getApplicationContext());
    //Other fragments load after this that actually do things with the audio file, but
    //I want to get this working before anything else runs.
}
Then the AudioMethods.test function:
public static void test(Context context){
    String fileName = "audio-samples/samplefile.wav";
    try{
        releaseStaticDispatcher(dispatcher);
        TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
                22050,
                16, //based on the screenshot from Audacity, should this be 32?
                1,
                2,
                22050,
                ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));
        AssetManager assetManager = context.getAssets();
        AssetFileDescriptor fileDescriptor = assetManager.openFd(fileName);
        InputStream stream = fileDescriptor.createInputStream();
        dispatcher = new AudioDispatcher(new UniversalAudioInputStream(stream, tarsosDSPAudioFormat), 1024, 512);
        //Not playing sound for some reason...
        final AudioProcessor playerProcessor = new AndroidAudioPlayer(tarsosDSPAudioFormat, 22050, AudioManager.STREAM_MUSIC);
        dispatcher.addAudioProcessor(playerProcessor);
        dispatcher.run();
        Thread audioThread = new Thread(dispatcher, "Test Audio Thread");
        audioThread.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Console output. No errors, just the warning:
W/AudioTrack: Use of stream types is deprecated for operations other than volume control
See the documentation of AudioTrack() for what to use instead with android.media.AudioAttributes to qualify your playback use case
D/AudioTrack: stop(38): called with 12288 frames delivered
Because the AudioTrack is delivering frames, and there aren't any runtime errors, I'm assuming I'm just missing something dumb by either not having sufficient permissions or having missed something in setting up my AndroidAudioPlayer. I got the 22050 number by opening the file in Audacity and looking at the stats there.
Any help is appreciated! Thanks :)
Okay, I figured this out.
I'll address my questions as they appeared originally:
TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
22050,
16, //based on the screenshot from Audacity, should this be 32?
1,
2,
22050,
ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));
ANS: No. Per the following TarsosDSP AndroidAudioPlayer header (copied below), I'm limited to 16:
/**
 * Constructs a new AndroidAudioPlayer from an audio format, default buffer size and stream type.
 *
 * @param audioFormat The audio format of the stream that this AndroidAudioPlayer will process.
 *                    This can only be 1 channel, PCM 16 bit.
 * @param bufferSizeInSamples The requested buffer size in samples.
 * @param streamType The type of audio stream that the internal AudioTrack should use. For
 *                   example, {@link AudioManager#STREAM_MUSIC}.
 * @throws IllegalArgumentException if audioFormat is not valid or if the requested buffer size is invalid.
 * @see AudioTrack
 */
The following modifications needed to be made to the test() method (this worked for me):
public static void test(Context context){
    String fileName = "audio-samples/samplefile.wav";
    try{
        releaseStaticDispatcher(dispatcher);
        TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
                22050,
                16,
                1,
                2,
                22050,
                ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));
        AssetManager assetManager = context.getAssets();
        AssetFileDescriptor fileDescriptor = assetManager.openFd(fileName);
        FileInputStream stream = fileDescriptor.createInputStream();
        dispatcher = new AudioDispatcher(new UniversalAudioInputStream(stream, tarsosDSPAudioFormat), 2048, 1024); //2048 corresponds to the buffer size in samples, 1024 is the buffer overlap and should just be half of the 'buffer size in samples' number (so...1024)
        AudioProcessor playerProcessor = new customAudioPlayer(tarsosDSPAudioFormat, 2048); //again, 2048 is the buffer size in samples
        dispatcher.addAudioProcessor(playerProcessor);
        dispatcher.run();
        Thread audioThread = new Thread(dispatcher, "Test Audio Thread");
        audioThread.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
You'll notice I now create a 'customAudioPlayer', which is, in reality, copy-pasted straight from the TarsosDSP AndroidAudioPlayer with two small adjustments:
I hardcoded the stream type in the AudioAttributes.Builder() call, so I'm no longer passing it in.
I'm using the AudioTrack.Builder() method because using stream types for playback was deprecated. Admittedly, I'm not sure if this was the change that fixed it, or if it was the change to the buffer size (or both?).
/*
 * Constructs a new AndroidAudioPlayer from an audio format, default buffer size and stream type.
 *
 * @param audioFormat The audio format of the stream that this AndroidAudioPlayer will process.
 *                    This can only be 1 channel, PCM 16 bit.
 * @param bufferSizeInSamples The requested buffer size in samples.
 * @throws IllegalArgumentException if audioFormat is not valid or if the requested buffer size is invalid.
 * @see AudioTrack
 */
public customAudioPlayer(TarsosDSPAudioFormat audioFormat, int bufferSizeInSamples) {
    if (audioFormat.getChannels() != 1) {
        throw new IllegalArgumentException("TarsosDSP only supports mono audio channel count: " + audioFormat.getChannels());
    }
    // The requested sample rate
    int sampleRate = (int) audioFormat.getSampleRate();
    //The buffer size in bytes is twice the buffer size expressed in samples if 16bit samples are used:
    int bufferSizeInBytes = bufferSizeInSamples * audioFormat.getSampleSizeInBits() / 8;
    // From the Android API about getMinBufferSize():
    // The total size (in bytes) of the internal buffer where audio data is read from for playback.
    // If track's creation mode is MODE_STREAM, you can write data into this buffer in chunks less than or equal to this size,
    // and it is typical to use chunks of 1/2 of the total size to permit double-buffering. If the track's creation mode is MODE_STATIC,
    // this is the maximum length sample, or audio clip, that can be played by this instance. See getMinBufferSize(int, int, int) to determine
    // the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller
    // than getMinBufferSize() will result in an initialization failure.
    int minBufferSizeInBytes = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    if (minBufferSizeInBytes > bufferSizeInBytes) {
        throw new IllegalArgumentException("The buffer size should be at least " + (minBufferSizeInBytes / (audioFormat.getSampleSizeInBits() / 8)) + " (samples) according to AudioTrack.getMinBufferSize().");
    }
    //http://developer.android.com/reference/android/media/AudioTrack.html#AudioTrack(int, int, int, int, int, int)
    //audioTrack = new AudioTrack(streamType, sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSizeInBytes, AudioTrack.MODE_STREAM);
    try {
        audioTrack = new AudioTrack.Builder()
                .setAudioAttributes(new AudioAttributes.Builder()
                        .setUsage(AudioAttributes.USAGE_MEDIA)
                        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                        .build())
                .setAudioFormat(new AudioFormat.Builder()
                        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                        .setSampleRate(sampleRate)
                        .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                        .build())
                .setBufferSizeInBytes(bufferSizeInBytes)
                .build();
        audioTrack.play();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Also, on my device I noticed that the volume control rocker switches just control the ringer volume by default. I had to open an audio menu (three little dots once the ringer volume was 'active') to turn up the media volume.
I have no idea how to do this. I have read the answers to several similar questions and some websites that probably had the answer somewhere, but either I could not understand them or they were not what I am trying to do. It is also possible that some did have the answer, but I could not focus well enough to interpret it. I want a method that reads the signed 16-bit raw audio data from a WAV file and puts it into a short[]. I would prefer short, minimalistic, easy-to-understand answers because I would have less difficulty focusing on those.
Edit: Some have said this might be a duplicate of stackoverflow.com/questions/5210147/reading-wav-file-in-java. I do not understand that question or its answers well enough to even say whether it is different or why or how to change my question so it is not confused for that one.
Another edit: I have attempted using Phil Freihofner's answer, but when testing this by attempting to play back the audio, I just heard a lot of clicks. I am not sure if I implemented it correctly. Here is the method that reads the file:
static void loadAudioDataTest(String filepath){
    int totalFramesRead = 0;
    File fileIn = new File(filepath);
    try {
        AudioInputStream audioInputStream =
            AudioSystem.getAudioInputStream(fileIn);
        int bytesPerFrame =
            audioInputStream.getFormat().getFrameSize();
        if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
            bytesPerFrame = 1;
        }
        int numBytes = 1024 * bytesPerFrame;
        byte[] audioBytes = new byte[numBytes];
        audioArray = new short[numBytes / 2];
        try {
            int numBytesRead = 0;
            int numFramesRead = 0;
            while ((numBytesRead =
                    audioInputStream.read(audioBytes)) != -1) {
                numFramesRead = numBytesRead / bytesPerFrame;
                totalFramesRead += numFramesRead;
            }
            for (int a = 0; a < audioArray.length; a++) {
                audioArray[acc] = (short) ((audioBytes[a * 2] & 0xff) | (audioBytes[acc * 2 + 1] << 8));
            }
        } catch (Exception ex) {
            // Handle the error...
        }
    } catch (Exception e) {
        // Handle the error...
    }
}
This bit plays the sound and is inside an actionPerformed(ActionEvent) method that is repeatedly triggered by a timer, in case the issue is there:
byte[] buf = new byte[2];
AudioFormat af = new AudioFormat(44100, 16, 1, true, false);
SourceDataLine sdl;
try {
    sdl = AudioSystem.getSourceDataLine(af);
    sdl.open();
    sdl.start();
    buf[1] = (byte) (audioArray[t % audioArray.length] & 0xFF);
    buf[0] = (byte) (audioArray[t % audioArray.length] >> 8);
    sdl.write(buf, 0, 2);
    sdl.drain();
    sdl.stop();
} catch (LineUnavailableException e1) {
    e1.printStackTrace();
}
t++;
The current core java class commonly used for loading data into a byte array is AudioInputStream (javax.sound.sampled.AudioInputStream). An example of its use, with explanation, can be found in the Oracle tutorial Using Files and Format Converters. The sample code is in the section titled "Reading Sound Files". Note the point in the innermost while loop with the following line: // Here, do something useful with the audio data. At that point, you would load the data into your array.
Taking two bytes and converting them to a short has been answered several times but I don't have the links handy. It's easier to just post some code I have used.
audioArray[i] = ( buffer[bufferIdx] & 0xff )
| ( buffer[bufferIdx + 1] << 8 ) ;
... where audioArray could be a short[]. (In my code I use float[] and do another step to scale the values to range from -1 to 1.)
This is a slightly modified snippet from the library AudioCue on GitHub, quoting from lines 391-393.
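Put together, a minimal sketch might look like the following. It assumes a 16-bit little-endian WAV and the usual javax.sound.sampled and java.io imports; the key point is that every chunk read must be collected, not just the last buffer:
// Sketch only: read a whole 16-bit little-endian WAV into a short[] of samples.
static short[] loadSamples(String filepath) throws Exception {
    AudioInputStream ais = AudioSystem.getAudioInputStream(new File(filepath));
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    int n;
    while ((n = ais.read(buf)) != -1) {
        bos.write(buf, 0, n); // keep every chunk, not only the most recent read
    }
    byte[] bytes = bos.toByteArray();
    short[] samples = new short[bytes.length / 2];
    for (int i = 0; i < samples.length; i++) {
        // little-endian: low byte first, then high byte
        samples[i] = (short) ((bytes[i * 2] & 0xff) | (bytes[i * 2 + 1] << 8));
    }
    return samples;
}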
I am trying to create a small java program to cut an audio file down to a specified length. Currently I have the following code:-
import java.util.*;
import java.io.*;
import javax.sound.sampled.*;
public class cuttest_3 {
    public static void main(String[] args)
    {
        int totalFramesRead = 0;
        File fileIn = new File("output1.wav");
        // somePathName is a pre-existing string whose value was
        // based on a user selection.
        try {
            AudioInputStream audioInputStream =
                AudioSystem.getAudioInputStream(fileIn);
            int bytesPerFrame =
                audioInputStream.getFormat().getFrameSize();
            if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
                // some audio formats may have unspecified frame size
                // in that case we may read any amount of bytes
                bytesPerFrame = 1;
            }
            // Set a buffer size of 5512 frames - semiquavers at 120bpm
            int numBytes = 5512 * bytesPerFrame;
            byte[] audioBytes = new byte[numBytes];
            try {
                int numBytesRead = 0;
                int numFramesRead = 0;
                // Try to read numBytes bytes from the file.
                while ((numBytesRead =
                        audioInputStream.read(audioBytes)) != -1) {
                    // Calculate the number of frames actually read.
                    numFramesRead = numBytesRead / bytesPerFrame;
                    totalFramesRead += numFramesRead;
                    // Here, - output a trimmed audio file
                    AudioInputStream cutFile =
                        new AudioInputStream(audioBytes);
                    AudioSystem.write(cutFile,
                        AudioFileFormat.Type.WAVE,
                        new File("cut_output1.wav"));
                }
            } catch (Exception ex) {
                // Handle the error...
            }
        } catch (Exception e) {
            // Handle the error...
        }
    }
}
On attempting compilation, the following error is returned:-
cuttest_3.java:50: error: incompatible types: byte[] cannot be converted to TargetDataLine
new AudioInputStream(audioBytes);
I am not very familiar with AudioInputStream handling in Java, so can anyone suggest a way I can conform the data to achieve output? Many thanks
You have to tell the AudioInputStream how to decipher the bytes you pass in, as Matt specifies in the answer here. This documentation indicates what each of the parameters means.
A stream of bytes does not mean anything until you indicate to the system playing the sound how many channels there are, the bit resolution per sample, samples per second, etc.
Since .wav files are a well-understood format and have data at the front of the file defining the various parameters of the audio track, the AudioInputStream can correctly decipher the first file you pass in.
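As a rough sketch, reusing the variables from the question (audioInputStream, audioBytes, numBytesRead), the cut stream could be constructed like this:
// Sketch only: wrap the raw bytes in a ByteArrayInputStream and describe them
// with the source stream's format; the length argument is in frames.
AudioFormat format = audioInputStream.getFormat();
AudioInputStream cutFile = new AudioInputStream(
        new ByteArrayInputStream(audioBytes, 0, numBytesRead),
        format,
        numBytesRead / format.getFrameSize());
AudioSystem.write(cutFile, AudioFileFormat.Type.WAVE, new File("cut_output1.wav"));
Note that doing this inside the read loop overwrites cut_output1.wav on every pass; to cut to a fixed length you would read only the number of frames you want before writing.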
I'm working on a project which downloads a file over an HTTP connection. I display a horizontal progress bar showing the download status.
My function looks like this:
.......
try {
    InputStream myInput = urlconnect.getInputStream();
    BufferedInputStream buffinput = new BufferedInputStream(myInput);
    ByteArrayBuffer baf = new ByteArrayBuffer(capacity);
    int current = 0;
    while ((current = buffinput.read()) != -1) {
        baf.append((byte) current);
    }
    File outputfile = new File(createRepertory(app, 0), Filename);
    FileOutputStream myOutPut = new FileOutputStream(outputfile);
    myOutPut.write(baf.toByteArray());
    ...
}
I know the size of my file in advance, so I need to track how much has been downloaded so far (in my while block). That way I'll be able to determine the status of the progress bar.
progressBarStatus = ((int) downloadFileHttp(url, app) * 100)/sizefile;
long downloadFileHttp(.., ..) is the name of my function.
I already tried to retrieve it by using outputfile.length, but its value is "1"; maybe that's the number of files I'm trying to download.
Is there any way to figure it out?
UPDATE 1
I haven't found any thread which allows me to figure this out.
Currently I have a horizontal progress bar which displays only 0 and 100%, without intermediate values. I'm thinking about another approach: if I know the rate of my Wi-Fi connection and the size of the file, I can determine the download time.
I know that I can retrieve the information about my Wi-Fi connection and the size of the file to download.
Has anybody already worked on this, or is there a thread about it?
I'll assume that you're using HttpURLConnection, in which case you need to call the getContentLength() method on urlconnect.
However, the server is not required to send a valid content length, so you should be prepared for it to be -1.
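A minimal sketch of that, reusing the variable names from the question (sizefile and progressBarStatus are assumed to be fields):
// Sketch only: track progress against Content-Length, guarding the -1 case.
int sizefile = urlconnect.getContentLength(); // may be -1 if the server omits it
int downloaded = 0;
int current;
while ((current = buffinput.read()) != -1) {
    baf.append((byte) current);
    downloaded++;
    if (sizefile > 0) {
        progressBarStatus = (int) (downloaded * 100L / sizefile); // percentage so far
    }
}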
AsyncTask might be the perfect solution for you:
private class DownloadFileTask extends AsyncTask<URL, Integer, Long> {
    protected Long doInBackground(URL... urls) {
        URL url = urls[0];
        //connect to url here
        .......
        try {
            InputStream myInput = urlconnect.getInputStream();
            BufferedInputStream buffinput = new BufferedInputStream(myInput);
            ByteArrayBuffer baf = new ByteArrayBuffer(capacity);
            int current = 0;
            while ((current = buffinput.read()) != -1) {
                baf.append((byte) current);
                //here you can send data to onProgressUpdate
                publishProgress((int) (((float) baf.length() / (float) sizefile) * 100));
            }
            File outputfile = new File(createRepertory(app, 0), Filename);
            FileOutputStream myOutPut = new FileOutputStream(outputfile);
            myOutPut.write(baf.toByteArray());
            ...
    }

    protected void onProgressUpdate(Integer... progress) {
        //here you can set the progress bar in the UI thread
        progressBarStatus = progress[0];
    }
}
To start the AsyncTask, call this within your method:
new DownloadFileTask().execute(url);
Simple. Below is the code:
try {
    URL url = new URL(yourLinkofFile);
    URLConnection conn = url.openConnection();
    conn.connect();
    totalFileSize = conn.getContentLength();
} catch (Exception e) {
    Log.e(TAG, "ERROR: " + e.toString());
}
Check the Content-Length header on the response. It should be set. All major HTTP servers use this header.
In the HTTP 1.1 spec, chunked response data is supposed to be pulled back across multiple rounds. Actually, the content length is -1 for a chunked response, so we can't use the available() method of InputStream. By the way, InputStream.available() is only reliable for getting the content length with a ByteArrayInputStream.
If you just want to get the total length, you need to calculate it yourself on each read round. See the IOUtils class in the Apache commons-io project below:
//-----------------------------------------------------------------------
/**
 * Copy bytes from an <code>InputStream</code> to an
 * <code>OutputStream</code>.
 * <p>
 * This method buffers the input internally, so there is no need to use a
 * <code>BufferedInputStream</code>.
 * <p>
 * Large streams (over 2GB) will return a bytes copied value of
 * <code>-1</code> after the copy has completed since the correct
 * number of bytes cannot be returned as an int. For large streams
 * use the <code>copyLarge(InputStream, OutputStream)</code> method.
 *
 * @param input the <code>InputStream</code> to read from
 * @param output the <code>OutputStream</code> to write to
 * @return the number of bytes copied
 * @throws NullPointerException if the input or output is null
 * @throws IOException if an I/O error occurs
 * @throws ArithmeticException if the byte count is too large
 * @since Commons IO 1.1
 */
public static int copy(InputStream input, OutputStream output) throws IOException {
    long count = copyLarge(input, output);
    if (count > Integer.MAX_VALUE) {
        return -1;
    }
    return (int) count;
}
/**
 * Copy bytes from a large (over 2GB) <code>InputStream</code> to an
 * <code>OutputStream</code>.
 * <p>
 * This method buffers the input internally, so there is no need to use a
 * <code>BufferedInputStream</code>.
 *
 * @param input the <code>InputStream</code> to read from
 * @param output the <code>OutputStream</code> to write to
 * @return the number of bytes copied
 * @throws NullPointerException if the input or output is null
 * @throws IOException if an I/O error occurs
 * @since Commons IO 1.3
 */
public static long copyLarge(InputStream input, OutputStream output)
        throws IOException {
    byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
    long count = 0;
    int n = 0;
    while (-1 != (n = input.read(buffer))) {
        output.write(buffer, 0, n);
        count += n;
    }
    return count;
}
If you want to track download progress, you need a callback on each read round from InputStream input to OutputStream output in the course of copying to disk. In the callback, you can take the size of the piece of data just copied and add it to a counter that your progress bar functionality reads. It is a little bit complex.
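For illustration, a small sketch of such a copy loop with a progress callback; the ProgressListener interface here is hypothetical and not part of commons-io:
// Hypothetical listener; invoke your progress bar update from its implementation.
interface ProgressListener { void onProgress(long bytesCopied); }

public static long copyWithProgress(InputStream input, OutputStream output, ProgressListener listener)
        throws IOException {
    byte[] buffer = new byte[4096];
    long count = 0;
    int n;
    while (-1 != (n = input.read(buffer))) {
        output.write(buffer, 0, n);
        count += n;
        listener.onProgress(count); // report the running total after each read round
    }
    return count;
}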
It sounds like your main problem is not getting the length of the file or figuring out the actual value, but rather how to access the current value from another thread so that you can update the status bar appropriately.
You have a few approaches to solve this:
1.) In your progress bar item have a callback that will let you set the value and call that method each time you update your count in your download thread.
2.) Put the value in some field that is accessible to both threads (potentially not-thread-safe).
If it were me, in my progress bar item I would have a method that would allow updating the progress with some value. Then I would call that method from my thread that is downloading the file.
So basically User --> Clicks some download button --> Handler calls the method to start the download, passing a callback to the update progress bar method --> Downloading thread calls the method on each iterative cycle with the updated percentage complete.
I think you're making your life too complicated :)
First: since progressBarStatus = ((int) downloadFileHttp(url, app) * 100)/sizefile; is always either 0 or 100, maybe you're not computing the value correctly. You didn't post the whole method there, but don't forget you're dealing with int values: sizefile is always an int, and integer division by something greater than or equal to the numerator always returns 0 or 1. I suspect that is the direction you need to look into...
Also, I don't see in your code where you're updating the progressbar after an intermediate byte read.
Second: I think it would be more efficient if you would read in chunks. The read is more efficient and you don't need to notify the UI thread for each downloaded byte. The answer from Adamski from this thread might help you out. Just use a smaller byte array. I am usually using 256 (3G) or 512 (Wi-Fi) - but maybe you don't need to go that much into detail. So once you got a new array read, count the total number of bytes read, notify the UI and continue reading until the end of stream.
Third: Call progressBar.setMax(sizeFile) before downloading, compute the number of downloaded bytes properly based on the comment from "First", and then call setProgress with that computed number. Just don't forget to update the progress bar on the UI thread. AsyncTask has a great mechanism to help you with that.
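A minimal sketch of that third point, assuming a ProgressBar field named progressBar and that publishProgress is given the running byte count rather than a percentage:
// Sketch only: size the bar to the file length up front, then update it from
// onProgressUpdate, which already runs on the UI thread.
progressBar.setMax(sizeFile);

@Override
protected void onProgressUpdate(Integer... progress) {
    progressBar.setProgress(progress[0]); // progress[0] = bytes downloaded so far
}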
Good luck!
Well this should help you out
URLConnection connection = servletURL.openConnection();
BufferedInputStream buff = new BufferedInputStream(connection.getInputStream());
ObjectInputStream input = new ObjectInputStream(buff);
int avail = buff.available();
System.out.println("Response content size = " + avail);
In a Java program, what is the best way to read an audio file (WAV file) to an array of numbers (float[], short[], ...), and to write a WAV file from an array of numbers?
I read WAV files via an AudioInputStream. The following snippet from the Java Sound Tutorials works well.
int totalFramesRead = 0;
File fileIn = new File(somePathName);
// somePathName is a pre-existing string whose value was
// based on a user selection.
try {
    AudioInputStream audioInputStream =
        AudioSystem.getAudioInputStream(fileIn);
    int bytesPerFrame =
        audioInputStream.getFormat().getFrameSize();
    if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
        // some audio formats may have unspecified frame size
        // in that case we may read any amount of bytes
        bytesPerFrame = 1;
    }
    // Set an arbitrary buffer size of 1024 frames.
    int numBytes = 1024 * bytesPerFrame;
    byte[] audioBytes = new byte[numBytes];
    try {
        int numBytesRead = 0;
        int numFramesRead = 0;
        // Try to read numBytes bytes from the file.
        while ((numBytesRead =
                audioInputStream.read(audioBytes)) != -1) {
            // Calculate the number of frames actually read.
            numFramesRead = numBytesRead / bytesPerFrame;
            totalFramesRead += numFramesRead;
            // Here, do something useful with the audio data that's
            // now in the audioBytes array...
        }
    } catch (Exception ex) {
        // Handle the error...
    }
} catch (Exception e) {
    // Handle the error...
}
To write a WAV, I found it quite tricky. On the surface it seems like a circular problem: the command that writes relies on an AudioInputStream as a parameter.
But how do you write bytes to an AudioInputStream? Shouldn't there be an AudioOutputStream?
What I found is that one can define an object that has access to the raw audio byte data and implements TargetDataLine.
This requires a lot of methods be implemented, but most can stay in dummy form as they are not required for writing data to a file. The key method to implement is read(byte[] buffer, int bufferoffset, int numberofbytestoread).
As this method will probably be called multiple times, there should also be an instance variable that indicates how far through the data one has progressed, and update that as part of the above read method.
When you have implemented this method, your object can be used to create a new AudioInputStream which in turn can be used with:
AudioSystem.write(yourAudioInputStream, AudioFileFormat.Type.WAVE, yourFileDestination)
As a reminder, an AudioInputStream can be created with a TargetDataLine as a source.
As to directly manipulating the data, I have had good success acting on the data in the buffer in the innermost loop of the snippet example above, audioBytes.
While you are in that inner loop, you can convert the bytes to integers or floats, multiply by a volume value (ranging from 0.0 to 1.0), and then convert them back to little-endian bytes.
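As a rough sketch of that volume step, assuming 16-bit little-endian samples in audioBytes, numBytesRead bytes of valid data, and a volume factor between 0.0 and 1.0:
// Sketch only: scale 16-bit little-endian PCM in place by `volume`.
for (int i = 0; i < numBytesRead; i += 2) {
    int sample = (audioBytes[i] & 0xff) | (audioBytes[i + 1] << 8); // assemble the little-endian sample
    sample = (int) (sample * volume);                               // apply the volume factor
    audioBytes[i] = (byte) sample;                                  // low byte back
    audioBytes[i + 1] = (byte) (sample >> 8);                       // high byte back
}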
I believe since you have access to a series of samples in that buffer you can also engage various forms of DSP filtering algorithms at that stage. In my experience I have found that it is better to do volume changes directly on data in this buffer because then you can make the smallest possible increment: one delta per sample, minimizing the chance of clicks due to volume-induced discontinuities.
I find the "control lines" for volume provided by Java tend to situations where the jumps in volume will cause clicks, and I believe this is because the deltas are only implemented at the granularity of a single buffer read (often in the range of one change per 1024 samples) rather than dividing the change into smaller pieces and adding them one per sample. But I'm not privy to how the Volume Controls were implemented, so please take that conjecture with a grain of salt.
All in all, Java Sound has been a real headache to figure out. I fault the Tutorial for not including an explicit example of writing a file directly from bytes. I also fault the Tutorial for burying the best example of play-a-file coding in the "How to Convert..." section. However, there's a LOT of valuable FREE info in that tutorial.
EDIT: 12/13/17
I've since used the following code to write audio from a PCM file in my own projects. Instead of implementing TargetDataLine, one can extend InputStream and use that (wrapped in an AudioInputStream) as a parameter to the AudioSystem.write method.
public class StereoPcmInputStream extends InputStream
{
    private float[] dataFrames;
    private int framesCounter;
    private int cursor;
    private int[] pcmOut = new int[2];
    private int[] frameBytes = new int[4];
    private int idx;
    private int framesToRead;

    public void setDataFrames(float[] dataFrames)
    {
        this.dataFrames = dataFrames;
        framesToRead = dataFrames.length / 2;
    }

    @Override
    public int read() throws IOException
    {
        while(available() > 0)
        {
            idx &= 3;
            if (idx == 0) // set up next frame's worth of data
            {
                framesCounter++; // count elapsing frames
                // scale to 16 bits
                pcmOut[0] = (int)(dataFrames[cursor++] * Short.MAX_VALUE);
                pcmOut[1] = (int)(dataFrames[cursor++] * Short.MAX_VALUE);
                // output as unsigned bytes, in range [0..255]
                frameBytes[0] = (char)pcmOut[0];
                frameBytes[1] = (char)(pcmOut[0] >> 8);
                frameBytes[2] = (char)pcmOut[1];
                frameBytes[3] = (char)(pcmOut[1] >> 8);
            }
            return frameBytes[idx++];
        }
        return -1;
    }

    @Override
    public int available()
    {
        // NOTE: not concurrency safe.
        // 1st half of sum: there are 4 reads available per frame to be read
        // 2nd half of sum: the # of bytes of the current frame that remain to be read
        return 4 * ((framesToRead - 1) - framesCounter)
                + (4 - (idx % 4));
    }

    @Override
    public void reset()
    {
        cursor = 0;
        framesCounter = 0;
        idx = 0;
    }

    @Override
    public void close()
    {
        System.out.println(
            "StereoPcmInputStream stopped after reading frames:"
            + framesCounter);
    }
}
The source data to be exported here is in the form of stereo floats ranging from -1 to 1. The format of the resulting stream is 16-bit, stereo, little-endian.
I omitted skip and markSupported methods for my particular application. But it shouldn't be difficult to add them if they are needed.
This is the source code to write directly to a wav file.
You just need to know the mathematics and sound engineering to produce the sound you want.
In this example the equation calculates a binaural beat.
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;

public class Program {
    public static void main(String[] args) throws IOException {
        final double sampleRate = 44100.0;
        final double frequency = 440;
        final double frequency2 = 90;
        final double amplitude = 1.0;
        final double seconds = 2.0;
        final double twoPiF = 2 * Math.PI * frequency;
        final double piF = Math.PI * frequency2;

        float[] buffer = new float[(int)(seconds * sampleRate)];
        for (int sample = 0; sample < buffer.length; sample++) {
            double time = sample / sampleRate;
            buffer[sample] = (float)(amplitude * Math.cos(piF * time) * Math.sin(twoPiF * time));
        }

        final byte[] byteBuffer = new byte[buffer.length * 2];
        int bufferIndex = 0;
        for (int i = 0; i < byteBuffer.length; i++) {
            final int x = (int)(buffer[bufferIndex++] * 32767.0);
            byteBuffer[i++] = (byte)x;
            byteBuffer[i] = (byte)(x >>> 8);
        }

        File out = new File("out10.wav");
        final boolean bigEndian = false;
        final boolean signed = true;
        final int bits = 16;
        final int channels = 1;

        AudioFormat format = new AudioFormat((float)sampleRate, bits, channels, signed, bigEndian);
        ByteArrayInputStream bais = new ByteArrayInputStream(byteBuffer);
        AudioInputStream audioInputStream = new AudioInputStream(bais, format, buffer.length);
        AudioSystem.write(audioInputStream, AudioFileFormat.Type.WAVE, out);
        audioInputStream.close();
    }
}
Some more detail on what you'd like to achieve would be helpful. If raw WAV data is okay for you, simply use a FileInputStream and probably a Scanner to turn it into numbers. But let me try to give you some meaningful sample code to get you started:
There is a class called com.sun.media.sound.WaveFileWriter for this purpose.
InputStream in = ...;
OutputStream outStream = ...;
AudioInputStream audioIn = AudioSystem.getAudioInputStream(in);
WaveFileWriter writer = new WaveFileWriter();
writer.write(audioIn, AudioFileFormat.Type.WAVE, outStream);
You could implement your own AudioInputStream that does whatever voodoo to turn your number arrays into audio data.
writer.write(new VoodooAudioInputStream(numbers), AudioFileFormat.Type.WAVE, outStream);
As @stacker mentioned, you should get yourself familiar with the API of course.
The javax.sound.sampled package is not suitable for processing WAV files if you need to have access to the actual sample values. The package lets you change volume, sample rate, etc., but if you want other effects (say, adding an echo), you are on your own. (The Java tutorial hints that it should be possible to process the sample values directly, but the tech writer overpromised.)
This site has a simple class for processing WAV files: http://www.labbookpages.co.uk/audio/javaWavFiles.html
WAV File Specification
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
There is an API for your purpose
http://code.google.com/p/musicg/
First of all, you may need to know the headers and data positions of a WAVE structure; you can find the spec here.
Be aware that the data are little-endian.
There's an API which may help you achieve your goal.
Wave files are supported by the javax.sound.sampled package
Since it isn't a trivial API, you should read an article or tutorial which introduces the API, like
Java Sound, An Introduction
If anyone still finds it useful, there is an audio framework I'm working on that aims to solve this and similar issues. It's in Kotlin, though. You can find it on GitHub: https://github.com/WaveBeans/wavebeans
It would look like this:
wave("file:///path/to/file.wav")
.map { it.asInt() } // here it as Sample type, need to convert it to desired type
.asSequence(44100.0f) // framework processes everything as sequence/stream
.toList() // read fully
.toTypedArray() // convert to array
And it's not dependent on Java Audio.
I use FileInputStream with some magic:
byte[] byteInput = new byte[(int) file.length() - 44];
short[] input = new short[byteInput.length / 2];
try {
    FileInputStream fis = new FileInputStream(file);
    fis.skip(44);        // skip the 44-byte WAV header
    fis.read(byteInput); // read the raw PCM data
    ByteBuffer.wrap(byteInput).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(input);
    fis.close();
} catch (Exception e) {
    e.printStackTrace();
}
Your sample values are in short[] input!