Sound class sounds layered and screechy on Windows - java

So, when I'm on Mac, this problem does not occur. However, when I am on Windows, any sounds I play multiple times over each other start to sound screechy, layering over each other in an unpleasant way.
Here is relevant code from my Sound class:
public class NewerSound {
    private boolean stop = true;
    private boolean loopable;
    private boolean isUrl;
    private String fileName; // used when constructed from a file path (that constructor is not shown here)
    private URL fileUrl;
    private Thread sound;
    private double volume = 1.0;

    public NewerSound(URL url, boolean loopable) throws UnsupportedAudioFileException, IOException {
        isUrl = true;
        fileUrl = url;
        this.loopable = loopable;
    }
    public void play() {
        stop = false;
        Runnable r = new Runnable() {
            @Override
            public void run() {
                do {
                    try {
                        AudioInputStream in;
                        if (!isUrl)
                            in = getAudioInputStream(new File(fileName));
                        else
                            in = getAudioInputStream(fileUrl);
                        final AudioFormat outFormat = getOutFormat(in.getFormat());
                        final Info info = new Info(SourceDataLine.class, outFormat);
                        try (final SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info)) {
                            if (line != null) {
                                line.open(outFormat);
                                line.start();
                                AudioInputStream inputMystream = AudioSystem.getAudioInputStream(outFormat, in);
                                stream(inputMystream, line);
                                line.drain();
                                line.stop();
                            }
                        }
                    }
                    catch (UnsupportedAudioFileException | LineUnavailableException | IOException e) {
                        throw new IllegalStateException(e);
                    }
                } while (loopable && !stop);
            }
        };
        sound = new Thread(r);
        sound.start();
    }
    private AudioFormat getOutFormat(AudioFormat inFormat) {
        final int ch = inFormat.getChannels();
        final float rate = inFormat.getSampleRate();
        return new AudioFormat(PCM_SIGNED, rate, 16, ch, ch * 2, rate, false);
    }
    private void stream(AudioInputStream in, SourceDataLine line) throws IOException {
        byte[] buffer = new byte[4];
        for (int n = 0; n != -1 && !stop; n = in.read(buffer, 0, buffer.length)) {
            byte[] bufferTemp = new byte[buffer.length];
            for (int i = 0; i < bufferTemp.length; i += 2) {
                short audioSample = (short) ((short) ((buffer[i + 1] & 0xff) << 8) | (buffer[i] & 0xff));
                audioSample = (short) (audioSample * volume);
                bufferTemp[i] = (byte) audioSample;
                bufferTemp[i + 1] = (byte) (audioSample >> 8);
            }
            buffer = bufferTemp;
            line.write(buffer, 0, n);
        }
    }
}
It is possible that this is an issue of multiple NewerSound.play() calls accessing the same resources when the same sound is played over itself.
Please let me know if any other details are needed. Much appreciated :)

The method you are using to change the volume in the method "stream" is flawed. You have 16-bit encoding, thus it takes two bytes to derive a single audio value. You need to assemble the value from each byte pair before the multiplication, then take the 16-bit result apart back into two bytes. There are a number of StackOverflow threads with code to do this.
I don't know if this is the whole reason for the problem you describe but it definitely could be, and definitely needs to be fixed.
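For reference, here is a minimal sketch of that assemble-scale-reassemble step for 16-bit little-endian PCM (the method name and the clamping are my additions, not part of the original code):

static void scaleVolume(byte[] buffer, int validBytes, double volume) {
    for (int i = 0; i + 1 < validBytes; i += 2) {
        // assemble the sample from its two bytes (low byte first)
        int sample = (short) (((buffer[i + 1] & 0xff) << 8) | (buffer[i] & 0xff));
        // scale, then clamp to the signed 16-bit range to avoid wrap-around distortion
        sample = (int) (sample * volume);
        sample = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sample));
        // split the 16-bit result back into two bytes
        buffer[i] = (byte) sample;
        buffer[i + 1] = (byte) (sample >> 8);
    }
}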

Related

AudioTrack only playing noise instead of recorded voice

I want to play recorded voice using AudioTrack, but it is making noise; I tried different techniques but was unable to solve this issue.
I changed: the frequency rate, the audio format channel, and the audio format encoding.
public class PlayAudio extends AsyncTask<Void, Integer, Void> {
    PlayAudio playTask;
    String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/MyFolder/";
    String myfile = path + "filename" + ".wav";
    File recordingFile = new File(myfile);
    boolean isRecording = false, isPlaying = false;
    int frequency = 44100, channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

    @Override
    protected Void doInBackground(Void... params) {
        isPlaying = true;
        int bufferSize = AudioTrack.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
        short[] audiodata = new short[bufferSize / 4];
        try {
            DataInputStream dis = new DataInputStream(new BufferedInputStream(new FileInputStream(recordingFile)));
            AudioTrack audioTrack = new AudioTrack(
                    AudioManager.STREAM_MUSIC, frequency,
                    channelConfiguration, audioEncoding, bufferSize,
                    AudioTrack.MODE_STREAM);
            audioTrack.play();
            while (isPlaying && dis.available() > 0) {
                int i = 0;
                while (dis.available() > 0 && i < audiodata.length) {
                    audiodata[i] = dis.readShort();
                    i++;
                }
                audioTrack.write(audiodata, 0, audiodata.length);
            }
            dis.close();
            // startPlaybackButton.setEnabled(false);
            // stopPlaybackButton.setEnabled(true);
        } catch (Throwable t) {
            Log.e("AudioTrack", "Playback Failed");
        }
        return null;
    }
}
I don't know if this is the whole problem, but part of your problem is that you're treating the WAV file as if all of it is audio data. In fact, there is a fair amount of meta-data in there. See http://soundfile.sapp.org/doc/WaveFormat/ for more information.
The safest thing to do is to parse the file until you find the data chunk, then read the data chunk, and then stop (because often there's meta-data that comes after the data chunk too).
Here's some rough code to give you the idea.
try {
    byte[] buffer = new byte[1024];
    // First find the data chunk
    byte[] bytes = new byte[4];
    // Read first 4 bytes.
    // (Should be RIFF descriptor.)
    // Assume it's ok
    is.read(bytes);
    // First subchunk will always be at byte 12.
    // (There is no other dependable constant.)
    is.skip(8);
    for (;;) {
        // Read each chunk descriptor.
        if (is.read(bytes) < 0) {
            break;
        }
        String desc = new String(bytes, "US-ASCII");
        // Read chunk length.
        if (is.read(bytes) < 0) {
            break;
        }
        int dataLength = (
                (bytes[0] & 0xFF) |
                ((bytes[1] & 0xFF) << 8) |
                ((bytes[2] & 0xFF) << 16) |
                ((bytes[3] & 0xFF) << 24));
        long length = getUnsignedInt(dataLength);
        if (desc.equals("data")) {
            // Read 'length' bytes
            ...

public static long getUnsignedInt(int x) {
    return x & 0x00000000ffffffffL;
}
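Put together as a self-contained sketch (the method name is mine, it assumes the whole data chunk fits in memory, and readFully will throw EOFException if no data chunk is found):

// Scans a WAV stream for the "data" chunk and returns its raw PCM payload.
static byte[] readDataChunk(InputStream is) throws IOException {
    DataInputStream in = new DataInputStream(is);
    in.skipBytes(12); // RIFF header: "RIFF", file size, "WAVE"
    byte[] four = new byte[4];
    while (true) {
        in.readFully(four); // chunk id
        String id = new String(four, "US-ASCII");
        in.readFully(four); // chunk size, little-endian
        int size = (four[0] & 0xFF) | ((four[1] & 0xFF) << 8)
                | ((four[2] & 0xFF) << 16) | ((four[3] & 0xFF) << 24);
        if (id.equals("data")) {
            byte[] pcm = new byte[size];
            in.readFully(pcm);
            return pcm; // stop here; meta-data chunks may follow
        }
        in.skipBytes(size + (size & 1)); // chunks are padded to an even length
    }
}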

how to use javax.sound.sampled.LineListener?

I have a program that asks the user which song they want to play out of a list of available songs; after the selected song finishes, it asks again which song they want to play. I have been told to use a LineListener for this, but I can't figure out how, even after reading the Oracle docs.
My code:
public class Main {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        String[] pathnames;
        File MusicFileChosen;
        String musicDir;
        boolean songComplete = false;
        pathnames = ProgramMap.musicDir.list();
        // Print the names of files and directories
        for (int ListNum = 0; ListNum < pathnames.length; ListNum++) {
            System.out.println(ListNum + 1 + ". " + pathnames[ListNum]);
        }
        for (int playlistLength = 0; playlistLength < pathnames.length; playlistLength++) {
            if (!songComplete) {
                System.out.println("Which Song would you like to play?");
                int musicChoice = input.nextInt();
                musicDir = ProgramMap.userDir + "\\src\\Music\\" + pathnames[musicChoice - 1];
                MusicFileChosen = new File(musicDir);
                PlaySound(MusicFileChosen, pathnames[musicChoice - 1]);
            }
        }
    }

    public static void PlaySound(File sound, String FileName) {
        try {
            // Inits the Audio System
            Clip clip = AudioSystem.getClip();
            AudioInputStream AudioInput = AudioSystem.getAudioInputStream(sound);
            // Finds and accesses the clip
            clip.open(AudioInput);
            // Starts the clip
            clip.start();
            System.out.println("Now Playing " + FileName);
            clip.drain();
        } catch (Exception e) {
            System.out.println("Error playing music");
        }
    }
}
Basically, one thing you need to change is to replace this:
for (int playlistLength = 0; playlistLength < pathnames.length; playlistLength++){
to something like:
while (true) {
    System.out.println("Which Song would you like to play?");
    int musicChoice = input.nextInt();
    musicDir = ProgramMap.userDir + "\\src\\Music\\" + pathnames[musicChoice - 1];
    MusicFileChosen = new File(musicDir);
    PlaySound(MusicFileChosen, pathnames[musicChoice - 1]);
}
You can add some logic to break the loop.
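For example (the 0 sentinel here is an arbitrary choice):

System.out.println("Which Song would you like to play? (0 to quit)");
int musicChoice = input.nextInt();
if (musicChoice == 0) {
    break; // leaves the while (true) loop
}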
Also, I would recommend changing the PlaySound method a little bit:
public static void PlaySound(File sound, String FileName) {
    try (final AudioInputStream in = getAudioInputStream(sound)) {
        final AudioFormat outFormat = getOutFormat(in.getFormat());
        Info info = new Info(SourceDataLine.class, outFormat);
        try (final SourceDataLine line =
                 (SourceDataLine) AudioSystem.getLine(info)) {
            if (line != null) {
                line.open(outFormat);
                line.start();
                System.out.println("Now Playing " + FileName);
                stream(getAudioInputStream(outFormat, in), line);
                line.drain();
                line.stop();
            }
        }
    } catch (UnsupportedAudioFileException
            | LineUnavailableException
            | IOException e) {
        System.err.println("Error playing music\n" + e.getMessage());
    }
}

private static AudioFormat getOutFormat(AudioFormat inFormat) {
    final int ch = inFormat.getChannels();
    final float rate = inFormat.getSampleRate();
    return new AudioFormat(PCM_SIGNED, rate, 16, ch, ch * 2, rate, false);
}

private static void stream(AudioInputStream in, SourceDataLine line)
        throws IOException {
    final byte[] buffer = new byte[4096];
    for (int n = 0; n != -1; n = in.read(buffer, 0, buffer.length)) {
        line.write(buffer, 0, n);
    }
}
If it needs to play MP3, you can face such a problem:
Unknown frame size.
To add MP3 reading support to Java Sound, add the mp3plugin.jar of the JMF to the run-time classpath of the application: https://www.oracle.com/technetwork/java/javase/download-137625.html
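Since the question asks specifically about LineListener: here is a minimal sketch of how it can be used with a Clip to find out when a song has finished (the CountDownLatch used to block the menu loop is my own choice, not something from the question):

import java.io.File;
import java.util.concurrent.CountDownLatch;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;
import javax.sound.sampled.LineEvent;

public static void playAndWait(File sound) throws Exception {
    Clip clip = AudioSystem.getClip();
    CountDownLatch done = new CountDownLatch(1);
    // The listener is notified of OPEN/START/STOP/CLOSE events on the line.
    clip.addLineListener((LineEvent event) -> {
        if (event.getType() == LineEvent.Type.STOP) {
            done.countDown(); // the clip reached its end (or was stopped)
        }
    });
    clip.open(AudioSystem.getAudioInputStream(sound));
    clip.start();
    done.await(); // block until the STOP event arrives
    clip.close();
}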

Android: decode MP3, mix a few audio tracks and encode to PCM (output is too fast)

I have a question about Android: decoding MP3, mixing a few audio tracks, and encoding to M4A (AAC). For that I use JLayer for Android to decode the MP3, AudioTrack to play the song, and MediaCodec with MediaFormat to encode the PCM. The problem is that my output after encoding is too fast; for example, I should have a 5-second audio mix, but instead I get about 1.5 seconds. I think I am losing audio frames somewhere, but I am not sure about that. Thanks for any answer.
(The output file is about 25% faster than it should be.)
Decode MP3 code:
public void decodeMP3toPCM(Resources res, int resource) throws BitstreamException, DecoderException, IOException {
    InputStream inputStream = new BufferedInputStream(res.openRawResource(resource), 1152);
    Bitstream bitstream = new Bitstream(inputStream);
    Decoder decoder = new Decoder();
    boolean done = false;
    while (!done) {
        Header frameHeader = bitstream.readFrame();
        if (frameHeader == null) {
            done = true;
        } else {
            SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
            mTimeCount += frameHeader.ms_per_frame();
            short[] pcm = output.getBuffer();
            mDataBuffer.addFrame(mViewId, pcm);
            mReadedFrames++;
            mAudioTrack.write(pcm, 0, pcm.length);
        }
        bitstream.closeFrame();
    }
    inputStream.close();
}
Encode code:
public class AudioEncoder {
    private MediaCodec mediaCodec;
    private BufferedOutputStream outputStream;
    private String mediaType = "audio/mp4a-latm";

    public AudioEncoder(String filePath) throws IOException {
        File f = new File(filePath);
        touch(f);
        try {
            outputStream = new BufferedOutputStream(new FileOutputStream(f));
        } catch (Exception e) {
            e.printStackTrace();
        }
        try {
            //mediaCodec = MediaCodec.createEncoderByType(mediaType);
            mediaCodec = MediaCodec.createByCodecName("OMX.google.aac.encoder");
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Note: this replaces the codec created by name just above.
        mediaCodec = MediaCodec.createEncoderByType(mediaType);
        final int kSampleRates[] = { 8000, 11025, 22050, 44100, 48000 };
        final int kBitRates[] = { 64000, 128000 };
        MediaFormat mediaFormat = MediaFormat.createAudioFormat(mediaType, kSampleRates[3], 2);
        mediaFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        mediaFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 4608);
        mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, kBitRates[1]);
        mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        mediaCodec.start();
    }

    public void close() {
        try {
            mediaCodec.stop();
            mediaCodec.release();
            outputStream.flush();
            outputStream.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public synchronized void offerEncoder(byte[] input) {
        Log.e("synchro ", input.length + " is coming");
        try {
            ByteBuffer[] inputBuffers = mediaCodec.getInputBuffers();
            ByteBuffer[] outputBuffers = mediaCodec.getOutputBuffers();
            int inputBufferIndex = mediaCodec.dequeueInputBuffer(-1);
            if (inputBufferIndex >= 0) {
                ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
                inputBuffer.clear();
                inputBuffer.put(input);
                mediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, 0, 0);
            }
            MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
            int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
            while (outputBufferIndex >= 0) {
                int outBitsSize = bufferInfo.size;
                int outPacketSize = outBitsSize + 7; // 7 is ADTS size
                ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
                outputBuffer.position(bufferInfo.offset);
                outputBuffer.limit(bufferInfo.offset + outBitsSize);
                byte[] outData = new byte[outPacketSize];
                addADTStoPacket(outData, outPacketSize);
                outputBuffer.get(outData, 7, outBitsSize);
                outputBuffer.position(bufferInfo.offset);
                outputStream.write(outData, 0, outData.length);
                mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
                outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
            }
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }

    private void addADTStoPacket(byte[] packet, int packetLen) {
        int profile = 2; // AAC LC
        //39=MediaCodecInfo.CodecProfileLevel.AACObjectELD;
        int freqIdx = 4; // 44.1KHz
        int chanCfg = 2; // CPE
        // fill in ADTS data
        packet[0] = (byte) 0xFF;
        packet[1] = (byte) 0xF9;
        packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
        packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
        packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
        packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
        packet[6] = (byte) 0xFC;
    }

    public void touch(File f) {
        try {
            if (!f.exists())
                f.createNewFile();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Split Wave audio file at silence [duplicate]

How can I detect silence when a recording operation is started in Java? What is PCM data? How can I calculate PCM data in Java?
I found the solution:
package bemukan.voiceRecognition.speechToText;

import javax.sound.sampled.*;
import java.io.*;

public class RecordAudio {
    private File audioFile;
    protected boolean running;
    private ByteArrayOutputStream out;
    private AudioInputStream inputStream;
    final static float MAX_8_BITS_SIGNED = Byte.MAX_VALUE;
    final static float MAX_8_BITS_UNSIGNED = 0xff;
    final static float MAX_16_BITS_SIGNED = Short.MAX_VALUE;
    final static float MAX_16_BITS_UNSIGNED = 0xffff;
    private AudioFormat format;
    private float level;
    private int frameSize;

    public RecordAudio() {
        getFormat();
    }

    private AudioFormat getFormat() {
        File file = new File("src/Facebook/1.wav");
        AudioInputStream stream;
        try {
            stream = AudioSystem.getAudioInputStream(file);
            format = stream.getFormat();
            frameSize = stream.getFormat().getFrameSize();
            return stream.getFormat();
        } catch (UnsupportedAudioFileException e) {
        } catch (IOException e) {
        }
        return null;
    }

    public void stopAudio() {
        running = false;
    }

    public void recordAudio() {
        try {
            final AudioFormat format = getFormat();
            DataLine.Info info = new DataLine.Info(
                    TargetDataLine.class, format);
            final TargetDataLine line = (TargetDataLine)
                    AudioSystem.getLine(info);
            line.open(format);
            line.start();
            Runnable runner = new Runnable() {
                int bufferSize = (int) format.getSampleRate()
                        * format.getFrameSize();
                byte buffer[] = new byte[bufferSize];

                public void run() {
                    int readPoint = 0;
                    out = new ByteArrayOutputStream();
                    running = true;
                    int sum = 0;
                    while (running) {
                        int count =
                                line.read(buffer, 0, buffer.length);
                        calculateLevel(buffer, 0, 0);
                        System.out.println(level);
                        if (count > 0) {
                            out.write(buffer, 0, count);
                        }
                    }
                    line.stop();
                }
            };
            Thread captureThread = new Thread(runner);
            captureThread.start();
        } catch (LineUnavailableException e) {
            System.err.println("Line unavailable: " + e);
            System.exit(-2);
        }
    }

    public File getAudioFile() {
        byte[] audio = out.toByteArray();
        InputStream input = new ByteArrayInputStream(audio);
        try {
            final AudioFormat format = getFormat();
            final AudioInputStream ais =
                    new AudioInputStream(input, format,
                            audio.length / format.getFrameSize());
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("temp.wav"));
            input.close();
            System.out.println("New file created!");
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
        return new File("temp.wav");
    }

    private void calculateLevel(byte[] buffer,
                                int readPoint,
                                int leftOver) {
        int max = 0;
        boolean use16Bit = (format.getSampleSizeInBits() == 16);
        boolean signed = (format.getEncoding() ==
                AudioFormat.Encoding.PCM_SIGNED);
        boolean bigEndian = (format.isBigEndian());
        if (use16Bit) {
            for (int i = readPoint; i < buffer.length - leftOver; i += 2) {
                int value = 0;
                // deal with endianness
                int hiByte = (bigEndian ? buffer[i] : buffer[i + 1]);
                int loByte = (bigEndian ? buffer[i + 1] : buffer[i]);
                if (signed) {
                    short shortVal = (short) hiByte;
                    shortVal = (short) ((shortVal << 8) | (byte) loByte);
                    value = shortVal;
                } else {
                    value = (hiByte << 8) | loByte;
                }
                max = Math.max(max, value);
            } // for
        } else {
            // 8 bit - no endianness issues, just sign
            for (int i = readPoint; i < buffer.length - leftOver; i++) {
                int value = 0;
                if (signed) {
                    value = buffer[i];
                } else {
                    short shortVal = 0;
                    shortVal = (short) (shortVal | buffer[i]);
                    value = shortVal;
                }
                max = Math.max(max, value);
            } // for
        } // 8 bit
        // express max as float of 0.0 to 1.0 of max value
        // of 8 or 16 bits (signed or unsigned)
        if (signed) {
            if (use16Bit) { level = (float) max / MAX_16_BITS_SIGNED; }
            else { level = (float) max / MAX_8_BITS_SIGNED; }
        } else {
            if (use16Bit) { level = (float) max / MAX_16_BITS_UNSIGNED; }
            else { level = (float) max / MAX_8_BITS_UNSIGNED; }
        }
    } // calculateLevel
}
How can I detect silence when a recording operation is started in Java?
Calculate the dB or RMS value for a group of sound frames and decide at what level it is considered to be 'silence'.
What is PCM data?
Data that is in Pulse-code modulation format.
How can I calculate PCM data in Java?
I do not understand that question. But guessing it has something to do with the speech-recognition tag, I have some bad news. This might theoretically be done using the Java Speech API. But there are apparently no 'speech to text' implementations available for the API (only 'text to speech').
I have to calculate RMS for a speech-recognition project, but I do not know how to calculate it in Java.
For a single channel that is represented by signal sizes in a double ranging from -1 to 1, you might use this method.
/** Computes the RMS volume of a group of signal sizes ranging from -1 to 1. */
public double volumeRMS(double[] raw) {
    double sum = 0d;
    if (raw.length == 0) {
        return sum;
    } else {
        for (int ii = 0; ii < raw.length; ii++) {
            sum += raw[ii];
        }
    }
    double average = sum / raw.length;
    double sumMeanSquare = 0d;
    for (int ii = 0; ii < raw.length; ii++) {
        sumMeanSquare += Math.pow(raw[ii] - average, 2d);
    }
    double averageMeanSquare = sumMeanSquare / raw.length;
    double rootMeanSquare = Math.sqrt(averageMeanSquare);
    return rootMeanSquare;
}
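If you would rather work with the dB value mentioned above, a common convention is to express the RMS level relative to full scale (the -100 floor for zero input is an arbitrary choice):

// Converts an RMS level in the 0..1 range to dBFS (0 dB = full scale).
static double toDecibels(double rms) {
    return rms > 0 ? 20.0 * Math.log10(rms) : -100.0; // arbitrary floor for silence
}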
There is a byte buffer to save input values from the line; what should I do with this buffer?
If using the volumeRMS(double[]) method, convert the byte values to an array of double values ranging from -1 to 1. ;)
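For example, for 16-bit signed little-endian PCM (these format assumptions are mine; check your own AudioFormat), something like this feeds the method above:

static double levelOf(byte[] buffer, int validBytes) {
    double[] raw = new double[validBytes / 2];
    for (int i = 0; i < raw.length; i++) {
        // assemble each 16-bit sample from its two bytes, low byte first
        short sample = (short) (((buffer[2 * i + 1] & 0xff) << 8) | (buffer[2 * i] & 0xff));
        raw[i] = sample / (double) Short.MAX_VALUE; // scale to -1..1
    }
    return volumeRMS(raw); // the volumeRMS method shown above
}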
You need to compare each sample value against a threshold; silence is zero or near zero.
Please adapt this code to your requirements!
In this case the threshold is a variable named UMBRAL ('threshold' in Spanish)...
Suppose that you have access to the WAV file as bytes, with the header in ByteHeader...
private Integer Byte2PosIntBig(byte Byte24, byte Byte16, byte Byte08, byte Byte00) {
    return new Integer(
            ((Byte24) << 24) |
            ((Byte16 & 0xFF) << 16) |
            ((Byte08 & 0xFF) << 8) |
            ((Byte00 & 0xFF) << 0));
}
Before:
RandomAccessFile RAFSource = new RandomAccessFile("your old file wav", "r");
It begins here:
int PSData = 44;
byte[] Bytes = new byte[4];
byte[] ByteHeader = new byte[44];
RAFSource.seek(0);
RAFSource.read(ByteHeader);
int WavSize = Byte2PosIntBig(ByteHeader[43], ByteHeader[42], ByteHeader[41], ByteHeader[40]);
int NumBits = Byte2PosIntBig((byte) 0, (byte) 0, ByteHeader[35], ByteHeader[34]);
int NumByte = NumBits / 8;
for (int i = PSData; i < PSData + WavSize; i += NumByte) {
    int WavSample = 0;
    int WavResultI = 0;
    int WavResultO = 0;
    if (NumByte == 2) {
        RAFSource.seek(i);
        Bytes[0] = RAFSource.readByte();
        Bytes[1] = RAFSource.readByte();
        WavSample = (int) (((Bytes[1]) << 8) | ((Bytes[0] & 0xFF) << 0));
        if (Math.abs(WavSample) < UMBRAL) {
            // SILENCE DETECTED!!!
        }
    } else {
        RAFSource.seek(i);
        WavSample = (short) (RAFSource.readByte() & 0xFF);
        short sSamT = (short) WavSample;
        sSamT += 128;
        double dSamD = (double) sSamT * Multiplier; // Multiplier is assumed to be defined elsewhere
        if ((double) sSamT < UMBRAL) {
            // SILENCE DETECTED!!!
        }
    }
}

Audio Mixing with Java (without Mixer API)

I am attempting to mix several different audio streams and trying to get them to play at the same time instead of one-at-a-time.
The code below plays them one-at-a-time and I cannot figure out a solution that does not use the Java Mixer API. Unfortunately, my audio card does not support synchronization using the Mixer API and I am forced to figure out a way to do it through code.
Please advise.
/////CODE IS BELOW////
class MixerProgram {
    public static AudioFormat monoFormat;
    private JFileChooser fileChooser = new JFileChooser();
    private static File[] files;
    private int trackCount;
    private FileInputStream[] fileStreams = new FileInputStream[trackCount];
    public static AudioInputStream[] audioInputStream;
    private Thread trackThread[] = new Thread[trackCount];
    private static DataLine.Info sourceDataLineInfo = null;
    private static SourceDataLine[] sourceLine;

    public MixerProgram(String[] s) {
        trackCount = s.length;
        sourceLine = new SourceDataLine[trackCount];
        audioInputStream = new AudioInputStream[trackCount];
        files = new File[s.length];
    }

    public static void getFiles(String[] s) {
        files = new File[s.length];
        for (int i = 0; i < s.length; i++) {
            File f = new File(s[i]);
            if (!f.exists())
                System.err.println("Wave file not found: " + s[i]);
            files[i] = f;
        }
    }

    public static void loadAudioFiles(String[] s) {
        AudioInputStream in = null;
        audioInputStream = new AudioInputStream[s.length];
        sourceLine = new SourceDataLine[s.length];
        for (int i = 0; i < s.length; i++) {
            try {
                in = AudioSystem.getAudioInputStream(files[i]);
            } catch (Exception e) {
                System.err.println("Failed to assign audioInputStream");
            }
            monoFormat = in.getFormat();
            AudioFormat decodedFormat = new AudioFormat(
                    AudioFormat.Encoding.PCM_SIGNED,
                    monoFormat.getSampleRate(), 16, monoFormat.getChannels(),
                    monoFormat.getChannels() * 2, monoFormat.getSampleRate(),
                    false);
            monoFormat = decodedFormat; // give back name
            audioInputStream[i] = AudioSystem.getAudioInputStream(decodedFormat, in);
            sourceDataLineInfo = new DataLine.Info(SourceDataLine.class, monoFormat);
            try {
                sourceLine[i] = (SourceDataLine) AudioSystem.getLine(sourceDataLineInfo);
                sourceLine[i].open(monoFormat);
            } catch (LineUnavailableException e) {
                System.err.println("Failed to get SourceDataLine" + e);
            }
        }
    }

    public static void playAudioMix(String[] s) {
        final int tracks = s.length;
        System.out.println(tracks);
        Runnable playAudioMixRunner = new Runnable() {
            int bufferSize = (int) monoFormat.getSampleRate() * monoFormat.getFrameSize();
            byte[] buffer = new byte[bufferSize];

            public void run() {
                if (tracks == 0)
                    return;
                for (int i = 0; i < tracks; i++) {
                    sourceLine[i].start();
                }
                int bytesRead = 0;
                while (bytesRead != -1) {
                    for (int i = 0; i < tracks; i++) {
                        try {
                            bytesRead = audioInputStream[i].read(buffer, 0, buffer.length);
                        } catch (IOException e) {
                            // TODO Auto-generated catch block
                            e.printStackTrace();
                        }
                        if (bytesRead >= 0) {
                            int bytesWritten = sourceLine[i].write(buffer, 0, bytesRead);
                            System.out.println(bytesWritten);
                        }
                    }
                }
            }
        };
        Thread playThread = new Thread(playAudioMixRunner);
        playThread.start();
    }
}
The problem is that you are not adding the samples together. If we are looking at 4 tracks, 16-bit PCM data, you need to add all the different values together to "mix" them into one final output. So, from a purely-numbers point-of-view, it would look like this:
[Track1] 320 -16 2000 200 400
[Track2] 16 8 123 -87 91
[Track3] -16 -34 -356 1200 805
[Track4] 1011 1230 -1230 -100 19
[Final!] 1331 1188 537 1213 1315
In your above code, you should only be writing a single byte array. That byte array is the final mix of all tracks added together. The problem is that you are writing a byte array for each different track (so there is no mixdown happening, as you observed).
If you want to guarantee you don't have any "clipping", you should take the average of all tracks (so add all four tracks above and divide by 4). However, there are artifacts from choosing that approach (for example, if you have silence on three tracks and one loud track, the final output will be much quieter than the volume of the one track that is not silent). There are more complicated algorithms you can use to do the mixing, but by then you are writing your own mixer :P.
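A minimal sketch of that mixdown for 16-bit little-endian PCM (here using clamping rather than averaging; the method name and buffer handling are my own choices):

// Mixes equally-sized buffers of 16-bit little-endian PCM into one output buffer,
// adding the samples together and clamping to the signed 16-bit range.
static byte[] mixTracks(byte[][] tracks, int validBytes) {
    byte[] mix = new byte[validBytes];
    for (int i = 0; i + 1 < validBytes; i += 2) {
        int sum = 0;
        for (byte[] track : tracks) {
            // assemble each track's sample, low byte first
            sum += (short) (((track[i + 1] & 0xff) << 8) | (track[i] & 0xff));
        }
        // clamp instead of averaging; see the trade-off described above
        int sample = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        mix[i] = (byte) sample;
        mix[i + 1] = (byte) (sample >> 8);
    }
    return mix; // write this single array to one SourceDataLine
}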
