Java - reading, manipulating and writing WAV files

In a Java program, what is the best way to read an audio file (WAV file) to an array of numbers (float[], short[], ...), and to write a WAV file from an array of numbers?

I read WAV files via an AudioInputStream. The following snippet from the Java Sound Tutorials works well.
int totalFramesRead = 0;
File fileIn = new File(somePathName);
// somePathName is a pre-existing string whose value was
// based on a user selection.
try {
    AudioInputStream audioInputStream =
            AudioSystem.getAudioInputStream(fileIn);
    int bytesPerFrame =
            audioInputStream.getFormat().getFrameSize();
    if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
        // some audio formats may have unspecified frame size
        // in that case we may read any amount of bytes
        bytesPerFrame = 1;
    }
    // Set an arbitrary buffer size of 1024 frames.
    int numBytes = 1024 * bytesPerFrame;
    byte[] audioBytes = new byte[numBytes];
    try {
        int numBytesRead = 0;
        int numFramesRead = 0;
        // Try to read numBytes bytes from the file.
        while ((numBytesRead =
                audioInputStream.read(audioBytes)) != -1) {
            // Calculate the number of frames actually read.
            numFramesRead = numBytesRead / bytesPerFrame;
            totalFramesRead += numFramesRead;
            // Here, do something useful with the audio data that's
            // now in the audioBytes array...
        }
    } catch (Exception ex) {
        // Handle the error...
    }
} catch (Exception e) {
    // Handle the error...
}
To write a WAV, I found that quite tricky. On the surface it seems like a circular problem: the method that does the writing relies on an AudioInputStream as a parameter.
But how do you write bytes to an AudioInputStream? Shouldn't there be an AudioOutputStream?
What I found was that one can define an object that has access to the raw audio byte data and implements TargetDataLine.
This requires that a lot of methods be implemented, but most can stay in dummy form as they are not required for writing data to a file. The key method to implement is read(byte[] buffer, int bufferoffset, int numberofbytestoread).
As this method will probably be called multiple times, there should also be an instance variable that indicates how far through the data one has progressed, updated as part of the above read method.
When you have implemented this method, your object can be used to create a new AudioInputStream, which in turn can be used with:
AudioSystem.write(yourAudioInputStream, AudioFileFormat.Type.WAVE, yourFileDestination)
As a reminder, an AudioInputStream can be created with a TargetDataLine as a source.
As to directly manipulating the data, I have had good success acting on the data in the buffer in the innermost loop of the snippet above, audioBytes.
While you are in that inner loop, you can convert the bytes to integers or floats, multiply by a volume value (ranging from 0.0 to 1.0), and then convert them back to little-endian bytes.
I believe since you have access to a series of samples in that buffer you can also engage various forms of DSP filtering algorithms at that stage. In my experience I have found that it is better to do volume changes directly on data in this buffer because then you can make the smallest possible increment: one delta per sample, minimizing the chance of clicks due to volume-induced discontinuities.
I find the "control lines" for volume provided by Java tend to lead to situations where the jumps in volume will cause clicks, and I believe this is because the deltas are only implemented at the granularity of a single buffer read (often in the range of one change per 1024 samples) rather than dividing the change into smaller pieces and adding them one per sample. But I'm not privy to how the Volume Controls were implemented, so please take that conjecture with a grain of salt.
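To make the inner-loop manipulation concrete, here is a minimal sketch (class and method names are mine, not from the tutorial) that applies a volume factor to a buffer of 16-bit little-endian samples, one sample at a time, as described above:

```java
// Sketch: per-sample volume scaling of 16-bit little-endian PCM bytes.
// Names are illustrative; only the byte/short conversion is the point.
public class VolumeScaler {
    public static void scaleVolume(byte[] audioBytes, int numBytesRead, float volume) {
        for (int i = 0; i < numBytesRead; i += 2) {
            // assemble a signed 16-bit sample from two little-endian bytes
            int sample = (audioBytes[i] & 0xff) | (audioBytes[i + 1] << 8);
            sample = (int) (sample * volume);
            // write the scaled sample back as little-endian bytes
            audioBytes[i] = (byte) sample;
            audioBytes[i + 1] = (byte) (sample >> 8);
        }
    }
}
```

A per-sample volume *ramp* (incrementing `volume` by a small delta each iteration) is the same loop, which is what makes this spot a good place to avoid clicks.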
All in all, Java Sound has been a real headache to figure out. I fault the Tutorial for not including an explicit example of writing a file directly from bytes. I fault the Tutorial for burying the best example of Play a File coding in the "How to Convert..." section. However, there's a LOT of valuable FREE info in that tutorial.
EDIT: 12/13/17
I've since used the following code to write audio from a PCM file in my own projects. Instead of implementing TargetDataLine one can extend InputStream and use that as a parameter to the AudioSystem.write method.
public class StereoPcmInputStream extends InputStream
{
    private float[] dataFrames;
    private int framesCounter;
    private int cursor;
    private int[] pcmOut = new int[2];
    private int[] frameBytes = new int[4];
    private int idx;
    private int framesToRead;

    public void setDataFrames(float[] dataFrames)
    {
        this.dataFrames = dataFrames;
        framesToRead = dataFrames.length / 2;
    }

    @Override
    public int read() throws IOException
    {
        while (available() > 0)
        {
            idx &= 3;
            if (idx == 0) // set up next frame's worth of data
            {
                framesCounter++; // count elapsing frames
                // scale to 16 bits
                pcmOut[0] = (int)(dataFrames[cursor++] * Short.MAX_VALUE);
                pcmOut[1] = (int)(dataFrames[cursor++] * Short.MAX_VALUE);
                // output as unsigned bytes, in range [0..255]
                frameBytes[0] = pcmOut[0] & 0xff;
                frameBytes[1] = (pcmOut[0] >> 8) & 0xff;
                frameBytes[2] = pcmOut[1] & 0xff;
                frameBytes[3] = (pcmOut[1] >> 8) & 0xff;
            }
            return frameBytes[idx++];
        }
        return -1;
    }

    @Override
    public int available()
    {
        // NOTE: not concurrency safe.
        // 1st half of sum: there are 4 reads available per frame to be read
        // 2nd half of sum: the # of bytes of the current frame that remain to be read
        return 4 * ((framesToRead - 1) - framesCounter)
                + (4 - (idx % 4));
    }

    @Override
    public void reset()
    {
        cursor = 0;
        framesCounter = 0;
        idx = 0;
    }

    @Override
    public void close()
    {
        System.out.println(
                "StereoPcmInputStream stopped after reading frames:"
                        + framesCounter);
    }
}
The source data to be exported here is in the form of stereo floats ranging from -1 to 1. The format of the resulting stream is 16-bit, stereo, little-endian.
I omitted skip and markSupported methods for my particular application. But it shouldn't be difficult to add them if they are needed.

This is the source code to write directly to a wav file.
You just need to know the mathematics and sound engineering to produce the sound you want.
In this example the equation calculates a binaural beat.
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
public class Program {
    public static void main(String[] args) throws IOException {
        final double sampleRate = 44100.0;
        final double frequency = 440;
        final double frequency2 = 90;
        final double amplitude = 1.0;
        final double seconds = 2.0;
        final double twoPiF = 2 * Math.PI * frequency;
        final double piF = Math.PI * frequency2;

        float[] buffer = new float[(int) (seconds * sampleRate)];
        for (int sample = 0; sample < buffer.length; sample++) {
            double time = sample / sampleRate;
            buffer[sample] = (float) (amplitude * Math.cos(piF * time) * Math.sin(twoPiF * time));
        }

        final byte[] byteBuffer = new byte[buffer.length * 2];
        int bufferIndex = 0;
        for (int i = 0; i < byteBuffer.length; i++) {
            final int x = (int) (buffer[bufferIndex++] * 32767.0);
            byteBuffer[i++] = (byte) x;
            byteBuffer[i] = (byte) (x >>> 8);
        }

        File out = new File("out10.wav");
        final boolean bigEndian = false;
        final boolean signed = true;
        final int bits = 16;
        final int channels = 1;

        AudioFormat format = new AudioFormat((float) sampleRate, bits, channels, signed, bigEndian);
        ByteArrayInputStream bais = new ByteArrayInputStream(byteBuffer);
        AudioInputStream audioInputStream = new AudioInputStream(bais, format, buffer.length);
        AudioSystem.write(audioInputStream, AudioFileFormat.Type.WAVE, out);
        audioInputStream.close();
    }
}

Some more detail on what you'd like to achieve would be helpful. If raw WAV data is okay for you, simply use a FileInputStream and probably a Scanner to turn it into numbers. But let me try to give you some meaningful sample code to get you started:
There is a class called com.sun.media.sound.WaveFileWriter for this purpose (note that it is an internal JDK class, not part of the public API).
InputStream in = ...;
OutputStream out = ...;
AudioInputStream ais = AudioSystem.getAudioInputStream(in);
WaveFileWriter writer = new WaveFileWriter();
writer.write(ais, AudioFileFormat.Type.WAVE, out);
You could implement your own AudioInputStream that does whatever voodoo to turn your number arrays into audio data.
writer.write(new VoodooAudioInputStream(numbers), AudioFileFormat.Type.WAVE, out);
As @stacker mentioned, you should get yourself familiar with the API, of course.

The javax.sound.sampled package is not suitable for processing WAV files if you need to have access to the actual sample values. The package lets you change volume, sample rate, etc., but if you want other effects (say, adding an echo), you are on your own. (The Java tutorial hints that it should be possible to process the sample values directly, but the tech writer overpromised.)
This site has a simple class for processing WAV files: http://www.labbookpages.co.uk/audio/javaWavFiles.html

WAV File Specification
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
There is an API for your purpose
http://code.google.com/p/musicg/

First of all, you may need to know the headers and data positions of a WAVE structure; you can find the spec here.
Be aware that the data are little-endian.
There's an API which may help you to achieve your goal.
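To illustrate the little-endian point, a ByteBuffer set to LITTLE_ENDIAN order decodes the data chunk's 16-bit samples correctly (helper name here is mine, for illustration only):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: decode little-endian 16-bit PCM bytes into signed shorts.
public class LittleEndianDemo {
    public static short[] toSamples(byte[] data) {
        short[] samples = new short[data.length / 2];
        ByteBuffer.wrap(data)
                  .order(ByteOrder.LITTLE_ENDIAN) // WAV sample data is little-endian
                  .asShortBuffer()
                  .get(samples);
        return samples;
    }
}
```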

Wave files are supported by the javax.sound.sampled package.
Since it isn't a trivial API, you should read an article / tutorial which introduces the API, like
Java Sound, An Introduction

If anyone still finds it useful, there is an audio framework I'm working on that aims to solve that and similar issues. It's written in Kotlin, though. You can find it on GitHub: https://github.com/WaveBeans/wavebeans
It would look like this:
wave("file:///path/to/file.wav")
.map { it.asInt() } // here "it" is of Sample type; convert it to the desired type
.asSequence(44100.0f) // framework processes everything as sequence/stream
.toList() // read fully
.toTypedArray() // convert to array
And it's not dependent on Java Audio.

I use FileInputStream with some magic:
byte[] byteInput = new byte[(int) file.length() - 44];
short[] input = new short[byteInput.length / 2];
try {
    FileInputStream fis = new FileInputStream(file);
    fis.skip(44); // skip the canonical 44-byte WAV header
    fis.read(byteInput, 0, byteInput.length);
    ByteBuffer.wrap(byteInput).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(input);
} catch (Exception e) {
    e.printStackTrace();
}
Your sample values are in short[] input!

Related

How do I convert data read from a WAV file to an array of signed 16-bit raw audio data in java?

I have no idea how to do this. I have read the answers to several similar questions and some websites that probably had the answer somewhere, but either I could not understand them or they were not what I am trying to do. It is also possible that some did have the answer, but I could not focus well enough to interpret it. I want a method that converts the data from a WAV file to signed 16-bit raw audio data and puts this into a short[]. I would prefer short, minimalistic, easy-to-understand answers because I would have less difficulty focusing on those.
Edit: Some have said this might be a duplicate of stackoverflow.com/questions/5210147/reading-wav-file-in-java. I do not understand that question or its answers well enough to even say whether it is different or why or how to change my question so it is not confused for that one.
Another edit: I have attempted using Phil Freihofner's answer, but when testing this by attempting to play back the audio, I just heard a lot of clicks. I am not sure if I implemented it correctly. Here is the method that reads the file:
static void loadAudioDataTest(String filepath) {
    int totalFramesRead = 0;
    File fileIn = new File(filepath);
    try {
        AudioInputStream audioInputStream =
                AudioSystem.getAudioInputStream(fileIn);
        int bytesPerFrame =
                audioInputStream.getFormat().getFrameSize();
        if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
            bytesPerFrame = 1;
        }
        int numBytes = 1024 * bytesPerFrame;
        byte[] audioBytes = new byte[numBytes];
        audioArray = new short[numBytes / 2];
        try {
            int numBytesRead = 0;
            int numFramesRead = 0;
            while ((numBytesRead =
                    audioInputStream.read(audioBytes)) != -1) {
                numFramesRead = numBytesRead / bytesPerFrame;
                totalFramesRead += numFramesRead;
            }
            for (int a = 0; a < audioArray.length; a++) {
                audioArray[acc] = (short) ((audioBytes[a * 2] & 0xff) | (audioBytes[acc * 2 + 1] << 8));
            }
        } catch (Exception ex) {
            // Handle the error...
        }
    } catch (Exception e) {
        // Handle the error...
    }
}
This bit plays the sound. It is inside an actionPerformed(ActionEvent) method that is repeatedly triggered by a timer, in case the issue is there:
byte[] buf = new byte[2];
AudioFormat af = new AudioFormat(44100, 16, 1, true, false);
SourceDataLine sdl;
try {
    sdl = AudioSystem.getSourceDataLine(af);
    sdl.open();
    sdl.start();
    buf[1] = (byte) (audioArray[t % audioArray.length] & 0xFF);
    buf[0] = (byte) (audioArray[t % audioArray.length] >> 8);
    sdl.write(buf, 0, 2);
    sdl.drain();
    sdl.stop();
} catch (LineUnavailableException e1) {
    e1.printStackTrace();
}
t++;
The current core java class commonly used for loading data into a byte array is AudioInputStream (javax.sound.sampled.AudioInputStream). An example of its use, with explanation, can be found in the Oracle tutorial Using Files and Format Converters. The sample code is in the section titled "Reading Sound Files". Note the point in the innermost while loop with the following line: // Here, do something useful with the audio data. At that point, you would load the data into your array.
Taking two bytes and converting them to a short has been answered several times but I don't have the links handy. It's easier to just post some code I have used.
audioArray[i] = ( buffer[bufferIdx] & 0xff )
| ( buffer[bufferIdx + 1] << 8 ) ;
... where audioArray could be a short[]. (In my code I use float[] and do another step to scale the values to range from -1 to 1.)
This is a slightly modified snippet from the library AudioCue on GitHub, quoting from lines 391-393.
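For completeness, the extra scaling step mentioned above might look like the following sketch (class name is illustrative; dividing by 32768 maps the full signed 16-bit range into [-1, 1]):

```java
// Sketch: normalize signed 16-bit samples to floats in [-1, 1].
public class SampleScaling {
    public static float[] toFloats(short[] audioShorts) {
        float[] out = new float[audioShorts.length];
        for (int i = 0; i < audioShorts.length; i++) {
            // 32768 = 2^15, the magnitude of Short.MIN_VALUE
            out[i] = audioShorts[i] / 32768f;
        }
        return out;
    }
}
```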

Get AudioInputStream of FloatBuffer

I have a callback that gets incoming audio data as FloatBuffer containing 1024 floats that gets called several times per second. But I need an AudioInputStream since my system only works with them.
Converting the floats into 16-bit signed PCM audio data is not a problem, but I cannot create an InputStream out of it. The AudioInputStream constructor only accepts data with known length, but I have a continuous stream. AudioSystem.getAudioInputStream throws a "java.io.IOException: mark/reset not supported" if I feed it with a PipedInputStream containing the audio data.
Any ideas?
Here's my current code:
Jack jack = Jack.getInstance();
JackClient client = jack.openClient("Test", EnumSet.noneOf(JackOptions.class), EnumSet.noneOf(JackStatus.class));
JackPort in = client.registerPort("in", JackPortType.AUDIO, EnumSet.of(JackPortFlags.JackPortIsInput));
PipedInputStream pin = new PipedInputStream(1024 * 1024 * 1024);
PipedOutputStream pout = new PipedOutputStream(pin);

client.setProcessCallback(new JackProcessCallback() {
    public boolean process(JackClient client, int nframes) {
        FloatBuffer inData = in.getFloatBuffer();
        byte[] buffer = new byte[inData.capacity() * 2];
        for (int i = 0; i < inData.capacity(); i++) {
            int sample = Math.round(inData.get(i) * 32767);
            buffer[i * 2] = (byte) sample;
            buffer[i * 2 + 1] = (byte) (sample >> 8);
        }
        try {
            pout.write(buffer, 0, buffer.length);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return true;
    }
});

client.activate();
client.transportStart();
Thread.sleep(10000);
client.transportStop();
client.close();

AudioInputStream audio = AudioSystem.getAudioInputStream(new BufferedInputStream(pin, 1024 * 1024 * 1024));
AudioSystem.write(audio, Type.WAVE, new File("test.wav"));
It uses the JnaJack library, but it doesn't really matter where the data comes from. The conversion to bytes is fine, by the way: writing that data directly to a SourceDataLine will work correctly. But I need the data as an AudioInputStream.
AudioSystem.getAudioInputStream expects a stream which conforms to a supported AudioFileFormat, which means it must conform to a known type. From the documentation:
The stream must point to valid audio file data.
And also from that documentation:
The implementation of this method may require multiple parsers to examine the stream to determine whether they support it. These parsers must be able to mark the stream, read enough data to determine whether they support the stream, and reset the stream's read pointer to its original position. If the input stream does not support these operations, this method may fail with an IOException.
You can create your own AudioInputStream using the three-argument constructor. If the length is not known, it can be specified as AudioSystem.NOT_SPECIFIED. Frustratingly, neither the constructor documentation nor the class documentation mentions this, but the other constructor's documentation does.
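A minimal sketch of that three-argument constructor with an unknown length (the format values below are assumptions for illustration, not taken from the question):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.InputStream;

// Sketch: wrap a raw PCM stream of unknown total length in an
// AudioInputStream, using AudioSystem.NOT_SPECIFIED as the frame count.
public class UnknownLengthStream {
    public static AudioInputStream wrap(InputStream raw) {
        // assumed format: 44.1 kHz, 16-bit, mono, signed, little-endian
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        return new AudioInputStream(raw, format, AudioSystem.NOT_SPECIFIED);
    }
}
```

This avoids AudioSystem.getAudioInputStream entirely, so no mark/reset support is needed on the underlying stream.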

Java audio - trim an audio file down to a specified length

I am trying to create a small java program to cut an audio file down to a specified length. Currently I have the following code:-
import java.util.*;
import java.io.*;
import javax.sound.sampled.*;
public class cuttest_3 {
    public static void main(String[] args)
    {
        int totalFramesRead = 0;
        File fileIn = new File("output1.wav");
        // somePathName is a pre-existing string whose value was
        // based on a user selection.
        try {
            AudioInputStream audioInputStream =
                    AudioSystem.getAudioInputStream(fileIn);
            int bytesPerFrame =
                    audioInputStream.getFormat().getFrameSize();
            if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
                // some audio formats may have unspecified frame size
                // in that case we may read any amount of bytes
                bytesPerFrame = 1;
            }
            // Set a buffer size of 5512 frames - semiquavers at 120bpm
            int numBytes = 5512 * bytesPerFrame;
            byte[] audioBytes = new byte[numBytes];
            try {
                int numBytesRead = 0;
                int numFramesRead = 0;
                // Try to read numBytes bytes from the file.
                while ((numBytesRead =
                        audioInputStream.read(audioBytes)) != -1) {
                    // Calculate the number of frames actually read.
                    numFramesRead = numBytesRead / bytesPerFrame;
                    totalFramesRead += numFramesRead;
                    // Here, - output a trimmed audio file
                    AudioInputStream cutFile =
                            new AudioInputStream(audioBytes);
                    AudioSystem.write(cutFile,
                            AudioFileFormat.Type.WAVE,
                            new File("cut_output1.wav"));
                }
            } catch (Exception ex) {
                // Handle the error...
            }
        } catch (Exception e) {
            // Handle the error...
        }
    }
}
On attempting compilation, the following error is returned:-
cuttest_3.java:50: error: incompatible types: byte[] cannot be converted to TargetDataLine
new AudioInputStream(audioBytes);
I am not very familiar with AudioInputStream handling in Java, so can anyone suggest a way I can conform the data to achieve output? Many thanks
You have to tell the AudioInputStream how to decipher the bytes you pass in as is specified by Matt in the answer here. This documentation indicates what each of the parameters mean.
A stream of bytes does not mean anything until you indicate to the system playing the sound how many channels there are, the bit resolution per sample, samples per second, etc.
Since .wav files are an understood protocol and I think they have data at the front of the file defining various parameters of the audio track, the AudioInputStream can correctly decipher the 1st file you pass in.
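One way to apply that advice to the code above is the three-argument AudioInputStream constructor, which pairs the raw bytes with the source format so the system knows how to decipher them. This is a sketch, not the poster's exact code; the helper name is illustrative:

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;

// Sketch: write a buffer of frames out as a WAV file by giving the
// AudioInputStream constructor the bytes, the format, and frame count.
public class TrimSketch {
    public static void writeCut(byte[] audioBytes, int numFramesRead,
                                AudioFormat format, File dest) throws IOException {
        AudioInputStream cut = new AudioInputStream(
                new ByteArrayInputStream(audioBytes), format, numFramesRead);
        AudioSystem.write(cut, AudioFileFormat.Type.WAVE, dest);
    }
}
```

In the trimming loop, the format would come from audioInputStream.getFormat() and numFramesRead from the bytes actually read.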

MappedByteBuffer not releasing memory

I am having trouble using the NIO MappedByteBuffer function to read very large seismic files. The format my program reads is called SEGY and consists of seismic data samples as well as meta data regarding, among other items, the numeric ID and XY coordinates of the seismic data.
The structure of the format is fairly fixed with a 240 byte header followed by a fixed number of data samples making up each seismic trace. The number of samples per trace can vary from file to file but usually is around 1000 to 2000.
Samples can be written as single bytes, 16 or 32 bit integers, or either IBM or IEEE float. The data in each trace header can likewise be in any of the above formats. To further confuse the issue SEGY files can be in big or little endian byte order.
The files can range in size from 3600 bytes up to several terabytes.
My application is a SEGY editor and viewer. For many of the functions it performs I must read only one or two variables, say long ints from each trace header.
At present I am reading from a RandomAccessFile into a byte buffer, then extracting the needed variables from a view buffer. This works but is painfully slow for very large files.
I have written a new file handler using a mapped byte buffer that breaks the file into 5000 trace MappedByteBuffers. This works well and is very fast until my system runs low on memory and then it slows to a crawl and I am forced to reboot just to make my Mac useable again.
For some reason the memory from the buffers is never released, even after my program is finished. I need to either do a purge or reboot.
This is my code. Any suggestions would be most appreciated.
package MyFileHandler;
import java.io.*;
import java.nio.*;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
public class MyFileHandler
{
    /*
    A buffered file IO class that keeps NTRACES traces in memory for reading and writing.
    The buffers start and end at trace boundaries, and the buffers are sequential,
    i.e. 1-20000, 20001-40000, etc.
    The last, or perhaps only, buffer will contain less than NTRACES up to the last trace.
    The arrays BufferOffsets and BufferLengths contain the start and length for all the
    buffers required to read and write to the file.
    */
    private static int NTRACES = 5000;
    private boolean HighByte;
    private long FileSize;
    private int BytesPerTrace;
    private FileChannel FileChnl;
    private MappedByteBuffer Buffer;
    private long BufferOffset;
    private int BufferLength;
    private long[] BufferOffsets;
    private int[] BufferLengths;
    private RandomAccessFile Raf;
    private int BufferIndex;
    private ArrayList Maps;

    public MyFileHandler(RandomAccessFile raf, int bpt)
    {
        try
        {
            HighByte = true;
            // allocate a filechannel to the file
            FileChnl = raf.getChannel();
            FileSize = FileChnl.size();
            BytesPerTrace = bpt;
            SetUpBuffers();
            BufferIndex = 0;
            GetNewBuffer(0);
        } catch (IOException ioe)
        {
            ioe.printStackTrace();
        }
    }

    private void SetUpBuffers()
    {
        // get number of traces in entire file
        int ntr = (int) ((FileSize - 3600) / BytesPerTrace);
        int nbuffs = ntr / NTRACES;
        // add one to nbuffs unless filesize is a multiple of NTRACES
        if (Math.IEEEremainder(ntr, NTRACES) != 0)
        {
            nbuffs++;
        }
        BufferOffsets = new long[nbuffs];
        BufferLengths = new int[nbuffs];
        // BufferOffsets are in bytes, not trace numbers
        // get the offsets and lengths of each buffer
        for (int i = 0; i < nbuffs; i++)
        {
            if (i == 0)
            {
                // first buffer contains EBCDIC header 3200 bytes and binary header 400 bytes
                BufferOffsets[i] = 0;
                BufferLengths[i] = 3600 + (Math.min(ntr, NTRACES) * BytesPerTrace);
            } else
            {
                BufferOffsets[i] = BufferOffsets[i - 1] + BufferLengths[i - 1];
                BufferLengths[i] = (int) (Math.min(FileSize - BufferOffsets[i], NTRACES * BytesPerTrace));
            }
        }
        GetMaps();
    }

    private void GetMaps()
    {
        // map the file to a list of MappedByteBuffers
        Maps = new ArrayList(BufferOffsets.length);
        try
        {
            for (int i = 0; i < BufferOffsets.length; i++)
            {
                MappedByteBuffer map = FileChnl.map(FileChannel.MapMode.READ_WRITE, BufferOffsets[i], BufferLengths[i]);
                SetByteOrder(map);
                Maps.add(map);
            }
        } catch (IOException ioe)
        {
            ioe.printStackTrace();
        }
    }

    private void GetNewBuffer(long offset)
    {
        if (Buffer == null || offset < BufferOffset || offset >= BufferOffset + BufferLength)
        {
            BufferIndex = GetBufferIndex(offset);
            BufferOffset = BufferOffsets[BufferIndex];
            BufferLength = BufferLengths[BufferIndex];
            Buffer = (MappedByteBuffer) Maps.get(BufferIndex);
        }
    }

    private int GetBufferIndex(long offset)
    {
        int indx = 0;
        for (int i = 0; i < BufferOffsets.length; i++)
        {
            if (offset >= BufferOffsets[i] && offset < BufferOffsets[i] + BufferLengths[i])
            {
                indx = i;
                break;
            }
        }
        return indx;
    }

    private void SetByteOrder(MappedByteBuffer ByteBuff)
    {
        if (HighByte)
        {
            ByteBuff.order(ByteOrder.BIG_ENDIAN);
        } else
        {
            ByteBuff.order(ByteOrder.LITTLE_ENDIAN);
        }
    }

    // public methods to read (get) or write (put) an array of types: byte, short, int, or float.
    // for the sake of brevity, only showing get and put for ints
    public void Get(int[] buff, long offset)
    {
        GetNewBuffer(offset);
        Buffer.position((int) (offset - BufferOffset));
        Buffer.asIntBuffer().get(buff);
    }

    public void Put(int[] buff, long offset)
    {
        GetNewBuffer(offset);
        Buffer.position((int) (offset - BufferOffset));
        Buffer.asIntBuffer().put(buff);
    }

    public void HighByteOrder(boolean hb)
    {
        // all byte swapping is done by the buffer class
        // set all allocated buffers to same byte order
        HighByte = hb;
    }

    public int GetBuffSize()
    {
        return BufferLength;
    }

    public void Close()
    {
        try
        {
            FileChnl.close();
        } catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
You are mapping the entire file into memory, via a possibly large number of MappedByteBuffers, and as you are keeping them all in a list they are never released. It is pointless. You may as well map the entire file with a single MappedByteBuffer, or the minimum number you need to overcome the address limitation. There is no benefit in using more of them than you need.
But I would only map the segment of the file that is currently being viewed/edited, and release it when the user moves to another segment.
I'm surprised that MappedByteBuffer is found to be so much faster. Last time I tested, reads via mapped byte buffers were only 20% faster than RandomAccessFile, and writes not at all. I'd like to see the RandomAccessFile code, as it seems there is probably something wrong with it that could easily be fixed.
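A sketch of the segment-at-a-time suggestion above (names are illustrative, not from the poster's code). Holding only one mapping and dropping the reference to the previous one lets the garbage collector eventually release the mapped memory:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: map only the file segment currently being viewed/edited.
// The previous mapping becomes unreachable and can be reclaimed by GC.
public class SegmentMapper {
    private final FileChannel channel;
    private MappedByteBuffer current; // only one mapping held at a time

    public SegmentMapper(RandomAccessFile raf) {
        this.channel = raf.getChannel();
    }

    public MappedByteBuffer mapSegment(long offset, int length) throws IOException {
        // replacing the reference drops the old mapping
        current = channel.map(FileChannel.MapMode.READ_ONLY, offset, length);
        return current;
    }
}
```

Note there is no explicit unmap in the public API; the memory is released when the buffer object is garbage-collected, which is why keeping every mapping in a list prevents release.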

Java SFXR Port - Trouble writing byte[] to WAV file

I'm using a Java port of the sound effect generator SFXR, which involves lots of arcane music code that I don't understand, being something of a novice when it comes to anything to do with audio. What I do know is that the code can reliably generate and play sounds within Java, using a SourceDataLine object.
The data that the SDL object uses is stored in a byte[]. However, simply writing this out to a file doesn't work (presumably because of the lack of a WAV header, or so I thought).
However, I downloaded this WAV read/write class: http://computermusicblog.com/blog/2008/08/29/reading-and-writing-wav-files-in-java/ which adds in header information when it writes a WAV file. Giving it the byte[] data from SFXR still produces files that can't be played by any music player I have.
I figure I must be missing something. Here's the relevant code when it plays the sound data:
public void play(int millis) throws Exception {
    AudioFormat stereoFormat = getStereoAudioFormat();
    SourceDataLine stereoSdl = AudioSystem.getSourceDataLine(stereoFormat);
    if (!stereoSdl.isOpen()) {
        try {
            stereoSdl.open();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }
    if (!stereoSdl.isRunning()) {
        stereoSdl.start();
    }
    double seconds = millis / 1000.0;
    int bufferSize = (int) (4 * 41000 * seconds);
    byte[] target = new byte[bufferSize];
    writeBytes(target);
    stereoSdl.write(target, 0, target.length);
}
That's from the SFXR port. Here's the save() file from the WavIO class (there's a lot of other code in that class of course, I figured this might be worth posting in case someone wants to see exactly how the buffer data is being handled:
public boolean save()
{
    try
    {
        DataOutputStream outFile = new DataOutputStream(new FileOutputStream(myPath));
        // write the wav file per the wav file format
        outFile.writeBytes("RIFF");                                // 00 - RIFF
        outFile.write(intToByteArray((int) myChunkSize), 0, 4);    // 04 - how big is the rest of this file?
        outFile.writeBytes("WAVE");                                // 08 - WAVE
        outFile.writeBytes("fmt ");                                // 12 - fmt
        outFile.write(intToByteArray((int) mySubChunk1Size), 0, 4); // 16 - size of this chunk
        outFile.write(shortToByteArray((short) myFormat), 0, 2);   // 20 - what is the audio format? 1 for PCM = Pulse Code Modulation
        outFile.write(shortToByteArray((short) myChannels), 0, 2); // 22 - mono or stereo? 1 or 2? (or 5 or ???)
        outFile.write(intToByteArray((int) mySampleRate), 0, 4);   // 24 - samples per second (numbers per second)
        outFile.write(intToByteArray((int) myByteRate), 0, 4);     // 28 - bytes per second
        outFile.write(shortToByteArray((short) myBlockAlign), 0, 2); // 32 - # of bytes in one sample, for all channels
        outFile.write(shortToByteArray((short) myBitsPerSample), 0, 2); // 34 - how many bits in a sample(number)? usually 16 or 24
        outFile.writeBytes("data");                                // 36 - data
        outFile.write(intToByteArray((int) myDataSize), 0, 4);     // 40 - how big is this data chunk
        outFile.write(myData);                                     // 44 - the actual data itself - just a long string of numbers
    }
    catch (Exception e)
    {
        System.out.println(e.getMessage());
        return false;
    }
    return true;
}
All I know is, I've got a bunch of data, and I want it to end up in a playable audio file of some kind (at this point I'd take ANY format!). What's the best way for me to get this byte buffer into a playable file? Or is this byte[] not what I think it is?
I do not get much chance to play with the sound capabilities of Java so I'm using your question as a learning exercise (I hope you don't mind). The article that you referenced about Reading and Writing WAV Files in Java is very old in relation to Java history (1998). Also something about constructing the WAV header by hand didn't sit quite right with me (it seemed a little error prone). As Java is quite a mature language now I would expect library support for this kind of thing.
I was able to construct a WAV file from a byte array by hunting around the internet for sample code snippets. This is the code that I came up with (I expect it is sub-optimal but it seems to work):
// Generate bang noise data
// Sourced from http://www.rgagnon.com/javadetails/java-0632.html
public static byte[] bang() {
    byte[] buf = new byte[8050];
    Random r = new Random();
    boolean silence = true;
    for (int i = 0; i < 8000; i++) {
        while (r.nextInt() % 10 != 0) {
            buf[i] =
                silence ? 0
                        : (byte) Math.abs(r.nextInt()
                            % (int) (1. + 63. * (1. + Math.cos(((double) i)
                                * Math.PI / 8000.))));
            i++;
        }
        silence = !silence;
    }
    return buf;
}

private static void save(byte[] data, String filename) throws IOException, LineUnavailableException, UnsupportedAudioFileException {
    InputStream byteArray = new ByteArrayInputStream(data);
    AudioInputStream ais = new AudioInputStream(byteArray, getAudioFormat(), (long) data.length);
    AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File(filename));
}

private static AudioFormat getAudioFormat() {
    return new AudioFormat(
        8000f,  // sampleRate
        8,      // sampleSizeInBits
        1,      // channels
        true,   // signed
        false); // bigEndian
}

public static void main(String[] args) throws Exception {
    byte[] data = bang();
    save(data, "test.wav");
}
I hope it helps.
