I need to play a part of an MP3 file in my Java code. I wish to do this via a function which accepts the start and stop times in milliseconds.
JLayer contains a class called AdvancedPlayer, which has a method that accepts the start and stop positions in frames:
/**
 * Plays a range of MPEG audio frames
 * @param start The first frame to play
 * @param end The last frame to play
 * @return true if the last frame was played, or false if there are more frames.
 */
public boolean play(final int start, final int end) throws JavaLayerException
{
    boolean ret = true;
    int offset = start;
    while (offset-- > 0 && ret) ret = skipFrame();
    return play(end - start);
}
According to this, a frame lasts about 26 milliseconds. However, I need a finer degree of control than this: for example, I may wish to play from 40 ms to 50 ms.
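For reference, assuming that ~26 ms per frame figure, milliseconds map to frame indices roughly like this (illustrative only, not part of JLayer's API):
// Illustrative only: map a millisecond position to an MPEG frame index,
// assuming 1152 samples per frame at 44.1 kHz (~26.12 ms per frame).
static int msToFrame(double ms) {
    final double MS_PER_FRAME = 1152.0 / 44100.0 * 1000.0; // ~26.12 ms
    return (int) Math.floor(ms / MS_PER_FRAME);
}
// e.g. player.play(msToFrame(40), msToFrame(50)) collapses to play(1, 1),
// because both 40 ms and 50 ms fall inside the same frame.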
How can I do this? Do I need to convert the MP3 to .wav first?
The solution I used in the end was to first write the code to play part of a WAV file (i.e. from xxx ms to xxx ms), as I also need support for that file format. Here's the code for that:
File soundFile = new File(this.audioFilePath);
AudioInputStream originalAudioInputStream = AudioSystem.getAudioInputStream(soundFile);
AudioFormat audioFormat = originalAudioInputStream.getFormat();
// Use a float divisor so millisecond values under 1000 aren't truncated to zero
float startInBytes = (startTimeinMs / 1000f * audioFormat.getSampleRate() * audioFormat.getFrameSize());
float lengthInFrames = ((endTimeinMs - startTimeinMs) / 1000f * audioFormat.getSampleRate());
originalAudioInputStream.skip((long) startInBytes);
AudioInputStream partAudioInputStream = new AudioInputStream(originalAudioInputStream,
        originalAudioInputStream.getFormat(), (long) lengthInFrames);
// code to actually play the audio input stream here
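For completeness, the placeholder above can be filled in by writing the trimmed stream to a SourceDataLine; the buffer size and error handling below are my own choices, not part of the original code:
// Minimal playback sketch for the trimmed stream (assumes PCM data the mixer can handle)
DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
line.open(audioFormat);
line.start();
byte[] buffer = new byte[8 * 1024];
int bytesRead;
while ((bytesRead = partAudioInputStream.read(buffer, 0, buffer.length)) != -1) {
    line.write(buffer, 0, bytesRead);
}
line.drain();
line.close();
partAudioInputStream.close();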
Once this was working, I wrote the code below to convert an MP3 to a temporary WAV file (which I can then use with the above code); it uses JLayer and MP3SPI. I did try performing the above directly on the converted audio stream without first writing out to a file, but couldn't get it to work. I'm only using small MP3 files that convert and write out almost instantly, so I'm happy with this solution.
File soundFile = new File(this.inputFilePath);
AudioInputStream mp3InputStream = AudioSystem.getAudioInputStream(soundFile);
AudioFormat baseFormat = mp3InputStream.getFormat();
// Ask MP3SPI to decode to 16-bit signed PCM, keeping the source sample rate and channel count
AudioFormat decodedFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, baseFormat.getSampleRate(), 16,
        baseFormat.getChannels(), baseFormat.getChannels() * 2, baseFormat.getSampleRate(), false);
AudioInputStream convertedAudioInputStream = AudioSystem.getAudioInputStream(decodedFormat, mp3InputStream);
File outputFile = new File(this.outputFilePath);
AudioSystem.write(convertedAudioInputStream, AudioFileFormat.Type.WAVE, outputFile);
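Wired together, the two snippets can be driven from a single entry point; the method and field names below are hypothetical and purely illustrative:
// Hypothetical glue: convert once, then reuse the WAV-seeking code for each playback request
public void playRange(long startTimeinMs, long endTimeinMs) throws Exception {
    if (this.inputFilePath.toLowerCase().endsWith(".mp3")) {
        convertMp3ToTempWav();                    // the MP3 -> WAV snippet above
        this.audioFilePath = this.outputFilePath; // play from the temporary WAV
    } else {
        this.audioFilePath = this.inputFilePath;
    }
    playWavRange(startTimeinMs, endTimeinMs);     // the skip/trim/play snippet above
}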
If 26 milliseconds is the finest resolution you can achieve in an MP3 file, then you're out of luck. Converting it to WAV might work, but the source data (i.e. the MP3) still has that basic resolution limit.
Out of curiosity, why do you want to play 10 milliseconds worth of audio?
Problem: Wav file loads and is processed by AudioDispatcher, but no sound plays.
First, the permissions:
public void checkPermissions() {
if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this.requireContext(), Manifest.permission.RECORD_AUDIO)) {
//When permission is not granted by user, show them message why this permission is needed.
if (ActivityCompat.shouldShowRequestPermissionRationale(this.requireActivity(), Manifest.permission.RECORD_AUDIO)) {
Toast.makeText(this.getContext(), "Please grant permissions to record audio", Toast.LENGTH_LONG).show();
//Give user option to still opt-in the permissions
}
ActivityCompat.requestPermissions(this.requireActivity(), new String[]{Manifest.permission.RECORD_AUDIO}, MY_PERMISSIONS_RECORD_AUDIO);
launchProfile();
}
//If permission is granted, then proceed
else if (ContextCompat.checkSelfPermission(this.requireContext(), Manifest.permission.RECORD_AUDIO) == PackageManager.PERMISSION_GRANTED) {
launchProfile();
}
}
Then the launchProfile() function:
public void launchProfile() {
AudioMethods.test(getActivity().getApplicationContext());
//Other fragments load after this that actually do things with the audio file, but
//I want to get this working before anything else runs.
}
Then the AudioMethods.test function:
public static void test(Context context){
String fileName = "audio-samples/samplefile.wav";
try{
releaseStaticDispatcher(dispatcher);
TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
22050,
16, //based on the screenshot from Audacity, should this be 32?
1,
2,
22050,
ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));
AssetManager assetManager = context.getAssets();
AssetFileDescriptor fileDescriptor = assetManager.openFd(fileName);
InputStream stream = fileDescriptor.createInputStream();
dispatcher = new AudioDispatcher(new UniversalAudioInputStream(stream, tarsosDSPAudioFormat),1024,512);
//Not playing sound for some reason...
final AudioProcessor playerProcessor = new AndroidAudioPlayer(tarsosDSPAudioFormat, 22050, AudioManager.STREAM_MUSIC);
dispatcher.addAudioProcessor(playerProcessor);
dispatcher.run();
Thread audioThread = new Thread(dispatcher, "Test Audio Thread");
audioThread.start();
} catch (Exception e) {
e.printStackTrace();
}
}
Console output. No errors, just the warning:
W/AudioTrack: Use of stream types is deprecated for operations other than volume control
See the documentation of AudioTrack() for what to use instead with android.media.AudioAttributes to qualify your playback use case
D/AudioTrack: stop(38): called with 12288 frames delivered
Because the AudioTrack is delivering frames and there aren't any runtime errors, I'm assuming I'm just missing something dumb: either I don't have sufficient permissions, or I've missed something in setting up my AndroidAudioPlayer. I got the 22050 number by opening the file in Audacity and looking at the stats there.
Any help is appreciated! Thanks :)
Okay, I figured this out.
I'll address my questions as they appeared originally:
TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
22050,
16, //based on the screenshot from Audacity, should this be 32?
1,
2,
22050,
ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));
ANS: No. Per the following TarsosDSP AndroidAudioPlayer header (copied below), I'm limited to 16:
/**
 * Constructs a new AndroidAudioPlayer from an audio format, default buffer size and stream type.
 *
 * @param audioFormat The audio format of the stream that this AndroidAudioPlayer will process.
 *                    This can only be 1 channel, PCM 16 bit.
 * @param bufferSizeInSamples The requested buffer size in samples.
 * @param streamType The type of audio stream that the internal AudioTrack should use. For
 *                   example, {@link AudioManager#STREAM_MUSIC}.
 * @throws IllegalArgumentException if audioFormat is not valid or if the requested buffer size is invalid.
 * @see AudioTrack
 */
The following modifications needed to be made to the test() method (this worked for me):
public static void test(Context context){
String fileName = "audio-samples/samplefile.wav";
try{
releaseStaticDispatcher(dispatcher);
TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
22050,
16,
1,
2,
22050,
ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));
AssetManager assetManager = context.getAssets();
AssetFileDescriptor fileDescriptor = assetManager.openFd(fileName);
FileInputStream stream = fileDescriptor.createInputStream();
dispatcher = new AudioDispatcher(new UniversalAudioInputStream(stream, tarsosDSPAudioFormat),2048,1024); //2048 corresponds to the buffer size in samples, 1024 is the buffer overlap and should just be half of the 'buffer size in samples' number (so...1024)
AudioProcessor playerProcessor = new customAudioPlayer(tarsosDSPAudioFormat, 2048); //again, 2048 is the buffer size in samples
dispatcher.addAudioProcessor(playerProcessor);
dispatcher.run();
Thread audioThread = new Thread(dispatcher, "Test Audio Thread");
audioThread.start();
} catch (Exception e) {
e.printStackTrace();
}
}
You'll notice I now create a 'customAudioPlayer', which is, in reality, copy-pasted straight from the TarsosDSP AndroidAudioPlayer with two small adjustments:
I hardcoded the stream type in the AudioAttributes.Builder() call, so I am no longer passing it in.
I'm using the AudioTrack.Builder() method because using stream types for playback is deprecated. Admittedly, I'm not sure if this was the change that fixed it, or if it was the change to the buffer size (or both?).
/*
 * Constructs a new AndroidAudioPlayer from an audio format, default buffer size and stream type.
 *
 * @param audioFormat The audio format of the stream that this AndroidAudioPlayer will process.
 *                    This can only be 1 channel, PCM 16 bit.
 * @param bufferSizeInSamples The requested buffer size in samples.
 * @throws IllegalArgumentException if audioFormat is not valid or if the requested buffer size is invalid.
 * @see AudioTrack
 */
public customAudioPlayer(TarsosDSPAudioFormat audioFormat, int bufferSizeInSamples) {
if (audioFormat.getChannels() != 1) {
throw new IllegalArgumentException("TarsosDSP only supports mono audio channel count: " + audioFormat.getChannels());
}
// The requested sample rate
int sampleRate = (int) audioFormat.getSampleRate();
//The buffer size in bytes is twice the buffer size expressed in samples if 16bit samples are used:
int bufferSizeInBytes = bufferSizeInSamples * audioFormat.getSampleSizeInBits()/8;
// From the Android API about getMinBufferSize():
// The total size (in bytes) of the internal buffer where audio data is read from for playback.
// If track's creation mode is MODE_STREAM, you can write data into this buffer in chunks less than or equal to this size,
// and it is typical to use chunks of 1/2 of the total size to permit double-buffering. If the track's creation mode is MODE_STATIC,
// this is the maximum length sample, or audio clip, that can be played by this instance. See getMinBufferSize(int, int, int) to determine
// the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller
// than getMinBufferSize() will result in an initialization failure.
int minBufferSizeInBytes = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
if(minBufferSizeInBytes > bufferSizeInBytes){
throw new IllegalArgumentException("The buffer size should be at least " + (minBufferSizeInBytes/(audioFormat.getSampleSizeInBits()/8)) + " (samples) according to AudioTrack.getMinBufferSize().");
}
//http://developer.android.com/reference/android/media/AudioTrack.html#AudioTrack(int, int, int, int, int, int)
//audioTrack = new AudioTrack(streamType, sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSizeInBytes,AudioTrack.MODE_STREAM);
try {
audioTrack = new AudioTrack.Builder()
.setAudioAttributes(new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_MEDIA)
.setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
.build())
.setAudioFormat(new AudioFormat.Builder()
.setEncoding(AudioFormat.ENCODING_PCM_16BIT)
.setSampleRate(sampleRate)
.setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
.build())
.setBufferSizeInBytes(bufferSizeInBytes)
.build();
audioTrack.play();
} catch (Exception e) {
e.printStackTrace();
}
}
Also, on my device I noticed that the volume control rocker switches just control the ringer volume by default. I had to open an audio menu (three little dots once the ringer volume was 'active') to turn up the media volume.
Good day. I'm trying to create a music identification app (like Shazam) in Flutter (I'm also new to Flutter) and I want it to run on mobile and desktop.
I have this piece of code in Java that gives me back a byte array with the time-domain values in it:
File soundFile;
AudioInputStream audioStream;
AudioFormat audioFormat;
SourceDataLine sourceLine;
int check = 0;
byte[] songBytes;
DataLine.Info info;
soundFile = new File("./testWave.wav");
songBytes = new byte[(int) soundFile.length()];
audioStream = AudioSystem.getAudioInputStream(soundFile);
audioFormat = audioStream.getFormat();
info = new DataLine.Info(SourceDataLine.class, audioFormat);
sourceLine = (SourceDataLine) AudioSystem.getLine(info);
sourceLine.open(audioFormat);
sourceLine.start();
while (check > -1) {
check = audioStream.read(songBytes, 0, songBytes.length);
}
sourceLine.drain();
sourceLine.close();
for (int i = 0; i < songBytes.length; i++) {
System.out.println(songBytes[i]);
}
I have searched and could not find any way to do this in Flutter/Dart. Can anyone please give me guidance on the best way of doing this in Flutter/Dart, if it is possible? If not, can you please advise me on the best alternative approach?
Let's say your WAV header is 74 bytes long. (It will vary according to the number of sections, so really you need to parse it to determine that. But for any one source of WAV files it will often be the same number - use a hex dump to determine the offset of the data block plus 4.)
(By parsing the header you can find out other things like the sample rate and whether it's mono or stereo, etc.)
Then, if bytes is the Uint8List, you need bytes.buffer.asInt16List(74). This means: interpret the buffer backing the bytes as signed shorts, but starting at offset 74 (i.e. after the header).
var dataOffset = 74; // parse the WAV header or determine from a hex dump
var bytes = await file.readAsBytes();
var shorts = bytes.buffer.asInt16List(dataOffset);
print(shorts[0]); // the first sample of audio
print(shorts.length); // the number of audio samples
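If you want to do the header parsing on the Java side (where the question's original code lives), a minimal sketch of walking the RIFF chunks to find the start of the sample data could look like this; it assumes a standard little-endian RIFF/WAVE file and omits error handling:
// Walk the RIFF chunks of a WAV file and return the byte offset of the first sample.
// Assumes a standard little-endian RIFF/WAVE layout; no validation or error handling.
static int findDataOffset(byte[] wav) {
    int pos = 12; // skip "RIFF", the file size, and "WAVE"
    while (pos + 8 <= wav.length) {
        String chunkId = new String(wav, pos, 4, java.nio.charset.StandardCharsets.US_ASCII);
        int chunkSize = java.nio.ByteBuffer.wrap(wav, pos + 4, 4)
                .order(java.nio.ByteOrder.LITTLE_ENDIAN).getInt();
        if ("data".equals(chunkId)) {
            return pos + 8; // samples start right after the chunk id and size
        }
        pos += 8 + chunkSize + (chunkSize & 1); // chunk data is word-aligned
    }
    return -1; // no data chunk found
}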
I have been trying to manually read a WAV file in Java, reading an array of bytes and then writing it to an audio buffer for playback. I am getting playback, but it is heavily distorted. Java Sound supports 16-bit samples but not 24-bit.
I went into Logic 9 and exported a 24-bit audio file to 16-bit and then used it with my program. Originally, the 24-bit samples produced white noise. Now I can hear my sample, but it is very distorted and sounds like it has been bit-crushed.
Can anyone help me to get a clean signal?
I am very new to audio programming but I am currently working on a basic Digital Audio Workstation.
import javax.sound.sampled.*;
import javax.sound.sampled.DataLine.Info;
import javax.swing.filechooser.FileNameExtensionFilter;
import java.io.*;
public class AudioData {
private String filepath;
private String filepath1;
private File file;
private byte [] fileContent;
private Mixer mixer;
private Mixer.Info[] mixInfos;
private AudioInputStream input;
private ByteArrayOutputStream byteoutput;
public static void main (String [] args) {
AudioData audiodata = new AudioData();
}
public AudioData () {
filepath = "/Users/ivaannagen/Documents/Samples/Engineering Samples - Obscure Techno Vol 3 (WAV)/ES_OT3_Kit03_Gmin_130bpm/ES_OT3_Kit03_FX_Fast_Snare_Riser_Gmin_130bpm.wav";
filepath1 = "/Users/ivaannagen/Documents/Samples/dawsampletest.wav";
file = new File (filepath1);
readAudio();
}
public void readAudio () {
mixInfos = AudioSystem.getMixerInfo();
mixer = AudioSystem.getMixer(mixInfos[0]);
AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
// set up an audio format.
try {
DataLine.Info info = new DataLine.Info(SourceDataLine.class, format); // creates data line with class type and audio format.
SourceDataLine source = (SourceDataLine) AudioSystem.getLine(info);
System.out.println("Size of data line buffer: " + source.getBufferSize());
fileContent = new byte [source.getBufferSize() / 50];
byteoutput = new ByteArrayOutputStream();
input = AudioSystem.getAudioInputStream(file);
int readBytes = 0;
while ((readBytes = input.read(fileContent, 0, fileContent.length)) != -1) {
byteoutput.write(fileContent, 0, readBytes);
}
System.out.println("Size of audio buffer: " + fileContent.length);
//byteoutput.write(0);
// byteoutput.write(0);
System.out.println("Size of audio buffer: " + byteoutput.size());
source.open(format, source.getBufferSize()); // line must be open to be recognised by the mixer.
Line[] lines = mixer.getSourceLines();
System.out.println("mixer lines: " + lines.length);
// for(byte bytes: fileContent) {
// System.out.println(bytes);
// }
Thread playback = new Thread () {
public void run () {
// System.out.println((byteoutput.size() +2) % 4);
source.start(); // play (buffer originally empty)
source.write(byteoutput.toByteArray(), 0, byteoutput.size()); // write input bytes to output buffer
} // end run (to do).
}; // end thread action
playback.start(); // start thread
}
catch (LineUnavailableException lue) {
System.out.println(lue.getMessage());
}
catch (FileNotFoundException fnfe) {
System.out.println(fnfe.getMessage());
}
catch(IOException ioe) {
System.out.println(ioe.getMessage());
}
catch(UnsupportedAudioFileException uafe) {
System.out.println(uafe.getMessage());
}
}
}
Whether or not you can load and play a 24-bit file is system dependent, afaik.
I use Audacity for conversions. You should be able to import your file into Audacity and export it as 16-bit, stereo, little-endian, 44100 fps, and then load that export with Java's AudioInputStream.
What you hear when playing from Audacity or from Java should be pretty much identical (adjusting for volume). If not, the most likely reason pertains to a mistake or oversight in the code, which is very easy to do.
The use of a ByteArrayOutputStream in your code is superfluous. Read from the AudioInputStream into a fixed-size byte array (the size being the buffer length; I recommend trying 8 * 1024 or 16 * 1024 bytes first) and then use the SourceDataLine write method to ship that array.
Following is code that works on my system for loading and playing a "CD Quality" WAV called "a3.wav" that is in the same directory as the Java class. You should be able to swap in your own 44100 fps, 16-bit, stereo, little-endian WAV file.
I've commented out an attempt to load and play a 24-bit wav file called "spoken8000_24.wav". That attempt gave me an IllegalArgumentException: No line matching interface SourceDataLine supporting format PCM_SIGNED 8000.0 Hz, 24 bit, stereo, 6 bytes/frame, little-endian is supported.
I have to admit, I'm unclear if my system doesn't provide the needed line or if I might have coded the format incorrectly! My OS can certainly play the file. So I'm thinking there is a distinction between what an OS can do and what a "Mixer" on a given system provides to Java.
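One way to check that distinction programmatically (a small sketch of my own, not from the original answer) is to ask AudioSystem whether a SourceDataLine is available for the format before trying to open one:
// Probe whether the current mixer setup can supply a line for a given format
AudioFormat fmt24 = new AudioFormat(
        AudioFormat.Encoding.PCM_SIGNED, 8000, 24, 2, 6, 8000, false);
DataLine.Info lineInfo = new DataLine.Info(SourceDataLine.class, fmt24);
if (AudioSystem.isLineSupported(lineInfo)) {
    System.out.println("A SourceDataLine for this format is available.");
} else {
    System.out.println("No SourceDataLine for this format; convert to 16-bit first.");
}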
As a get-around, I just always convert everything to "CD Quality" format, as that seems to be the most widely supported.
import java.io.IOException;
import java.net.URL;
import javax.sound.sampled.*;
import javax.sound.sampled.DataLine.Info;
import javax.swing.*;

public class TriggerSound_SDL extends JFrame
{
public TriggerSound_SDL()
{
JButton button = new JButton("Play Sound");
button.addActionListener(e -> new Thread(() -> playBuzzer()).start());
getContentPane().add(button);
}
private void playBuzzer()
{
try
{
URL url;
url = getClass().getResource("a3.wav");
// url = getClass().getResource("spoken8000_24.wav");
AudioInputStream ais = AudioSystem.getAudioInputStream(url);
System.out.println(ais.getFormat());
AudioFormat audioFmt;
// "CD Quality" 44100 fps, 16-bit, stereo, little endian
audioFmt = new AudioFormat(
AudioFormat.Encoding.PCM_SIGNED,
44100, 16, 2, 4, 44100, false);
// 8000 fps, 24-bit, stereo
// audioFmt = new AudioFormat(
// AudioFormat.Encoding.PCM_SIGNED,
// 8000, 24, 2, 6, 8000, false);
Info info = new DataLine.Info(SourceDataLine.class,
audioFmt);
SourceDataLine sdl = (SourceDataLine)AudioSystem.getLine(info);
int bufferSize = 16 * 1024;
byte[] buffer = new byte[bufferSize];
sdl.open(audioFmt, bufferSize);
sdl.start();
int numBytesRead = 0;
while((numBytesRead = ais.read(buffer)) != -1)
{
sdl.write(buffer, 0, numBytesRead);
}
}
catch (IOException | UnsupportedAudioFileException
| LineUnavailableException ex)
{
ex.printStackTrace();
}
}
private static void createAndShowGUI()
{
JFrame frame = new TriggerSound_SDL();
frame.setDefaultCloseOperation(DISPOSE_ON_CLOSE);
frame.pack();
frame.setVisible(true);
}
public static void main(String[] args)
{
SwingUtilities.invokeLater(() -> createAndShowGUI());
}
}
This code, with some small tweaks, should let you at least test the different formats.
EDIT:
I'm seeing where your goal is to make a DAW!
In that case, you will want to convert the bytes to PCM data. Can I suggest you borrow some code from AudioCue? I basically wrote it to be a Clip substitute, and part of that involved making the PCM data available for manipulation. Some techniques for mixing, playback at different frequencies, and multithreading can be found in it.
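To give a flavour of what converting the bytes to PCM data involves, here is a small sketch of my own (not taken from AudioCue) that turns 16-bit little-endian frames into normalized floats that can be mixed and manipulated:
// Convert 16-bit signed little-endian PCM bytes into floats in roughly [-1, 1)
static float[] bytesToFloats(byte[] audioBytes) {
    float[] samples = new float[audioBytes.length / 2];
    for (int i = 0; i < samples.length; i++) {
        int lo = audioBytes[2 * i] & 0xFF;  // low byte, treated as unsigned
        int hi = audioBytes[2 * i + 1];     // high byte, keeps the sign
        samples[i] = ((hi << 8) | lo) / 32768f;
    }
    return samples;
}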
Thanks for all the advice, guys. I will be getting rid of the ByteArrayOutputStream and just use the AudioInputStream; I now understand that what I was doing was unnecessary! I have indeed tried using AudioCue, but it is not low-level enough for what I want to do!
One more thing, guys. Previously, I created a multitrack media player using the Clip class. To play all the audio tracks together, I was looping through a list of Clips and playing them. However, this means that the tracks may start a tiny amount after each other due to the processing of the loop. Also, the Clip class creates a new thread per clip; I do not want 100 threads running for 100 tracks, I want one thread for my audio output. I am still trying to work out how to start all tracks at the same time without a loop... (I'm guessing AudioCue has nailed concurrent cues).
Does anyone know the best way to play multiple audio tracks into one output? Do I need to route/bus all my audio tracks into one output and somehow write all the data from the audio files into one output buffer, then play that output in a thread?
Thanks!!
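One common approach is exactly what the question guesses at: decode each track to PCM, sum the samples frame by frame into a single mix buffer, and write that buffer to one SourceDataLine from a single playback thread. A rough sketch under those assumptions (float sample arrays per track, crude hard clipping):
// Mix several decoded tracks (as float sample arrays) into one output buffer
static float[] mixTracks(java.util.List<float[]> tracks, int length) {
    float[] mix = new float[length];
    for (float[] track : tracks) {
        for (int i = 0; i < length && i < track.length; i++) {
            mix[i] += track[i];
        }
    }
    for (int i = 0; i < length; i++) {
        // crude hard clip; a real DAW would use gain staging or a limiter
        mix[i] = Math.max(-1f, Math.min(1f, mix[i]));
    }
    return mix;
}
The mixed floats can then be converted back to 16-bit bytes (the inverse of the conversion sketched earlier) and written to one SourceDataLine, so only a single output thread is needed.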
AudioInputStream in = AudioSystem.getAudioInputStream(fileAudio);
AudioFormat audFormat = in.getFormat();
audFormat = new AudioFormat(
AudioFormat.Encoding.PCM_SIGNED,
audFormat.getSampleRate(),
16,
audFormat.getChannels(),
audFormat.getChannels() * 2,
audFormat.getSampleRate(),
false);
din = AudioSystem.getAudioInputStream(audFormat, in);
That's my code for trying to get the raw data; however, it is throwing the error "could not get audio input stream from input file". This seems to happen for MP3s only (I have only tested WAVs and MP3s). I've added mp3plugin and mp3spi, as others have suggested. Also, it seems to work when getting a file from a JFileChooser, but when hard-coding the file it falls over.
Here is the code creating the file:
private final String sFolderPath = "C:\\Users\\michael\\Music\\MUSIC\\Bushido.mp3";
File fileAudio = new File(sFolderPath);
Here is my code that concatenates four WAV files and produces wavAppended.wav. The concatenated file plays nicely in Windows Media Player.
But through the PlaySound class, only one.wav can be heard.
Can anyone help?
class PlaySound extends Object implements LineListener
{
File soundFile;
JDialog playingDialog;
Clip clip;
public void PlaySnd(String s) throws Exception
{
JFileChooser chooser = new JFileChooser();
soundFile = new File(s);
Line.Info linfo = new Line.Info(Clip.class);
Line line = AudioSystem.getLine(linfo);
clip = (Clip) line;
clip.addLineListener(this);
AudioInputStream ais = AudioSystem.getAudioInputStream(soundFile);
clip.open(ais);
clip.start();
}
public void update(LineEvent le)
{
LineEvent.Type type = le.getType();
playingDialog.setVisible(false);
clip.stop();
clip.close();
}
}
public class Main
{
public static void main(String[] args)
{
int i;
String wavFile[] = new String[4];
wavFile[0] = "D://one.wav";
wavFile[1] = "D://two.wav";
wavFile[2] = "D://three.wav";
wavFile[3] = "D://space.au";
AudioInputStream appendedFiles;
try
{
AudioInputStream clip0=AudioSystem.getAudioInputStream(new File(wavFile[0]));
AudioInputStream clip1=AudioSystem.getAudioInputStream(new File(wavFile[1]));
AudioInputStream clip3;
for (i=0;i<4;i++)
{
appendedFiles = new AudioInputStream(
new SequenceInputStream(clip0, clip1),
clip0.getFormat(),
clip0.getFrameLength() + clip1.getFrameLength());
AudioSystem.write(appendedFiles, AudioFileFormat.Type.WAVE, new File("D:\\wavAppended.wav"));
clip3 = AudioSystem.getAudioInputStream(new File("D:\\wavAppended.wav"));
clip0=clip3;
clip1 = AudioSystem.getAudioInputStream(new File(wavFile[i+2]));
}
PlaySound p = new PlaySound();
p.PlaySnd("D://wavAppended.wav");
}
catch (Exception e)
{
e.printStackTrace();
}
}
}
WAV files don't work that way -- you can't just throw multiple files together (same as you can't concatenate JPEG images, for instance), as there's a header on the data, and there are multiple different formats the data may be in. I'm surprised that the file loads at all.
To get you started with the WAV processing you may have a look at my small project. It can copy and paste WAV files together based on an time index file. The project should contain all the Java WAV processing you need (using javax.sound.sampled). The Butcher implementation and Composer contain the actual processing.
The idea is simple: take input audio files and create a index of words
contained in these files. The index entry is the word, start time and
end time. When a new sentence is created it will be stitched together
with single words taken from the index.
The AudioInputStream is the main class to interact with the Java Sound
API. You read audio data from it. If you create audio data you do this
by creating an AudioInputStream the AudioSystem can read from. The
actual encoding is done by the AudioSystem implementation depending on
the output audio format.
The Butcher class is the one concerned with audio files. It can read
and write audio files and create AudioInputStreams from an input byte
array. The other interesting thing the Butcher can do is cut samples
from a AudioInputStream. The AudioInputStream consists of frames that
represent the samples of the PCM signal. Frames have a length of
multiple bytes. To cut a valid range of frames from the
AudioInputStream one has to take the frame size into account. The
start and end time in milliseconds have to be translated to start byte
and end bytes of the start frame and end frame. (The start and end
data is stored as timestamps to keep them independent from the
underlying encoding of the file used.)
The Composer creates the output file. For a given sentence it takes
the audio data for each word from the input files, concatenates the
audio data and writes the result to disk.
In the end you'll need some understanding of the PCM and the WAV format. The Java sound API does not abstract that away.
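To make the frame-alignment point concrete, here is a small sketch of my own (not taken from the project above) that turns a millisecond range into frame-aligned offsets and cuts that range out of a PCM AudioInputStream:
// Cut [startMs, endMs] out of a PCM AudioInputStream, keeping the cut frame-aligned
static AudioInputStream cutRange(AudioInputStream in, long startMs, long endMs) throws java.io.IOException {
    AudioFormat fmt = in.getFormat();
    long startFrame = (long) (startMs / 1000.0 * fmt.getFrameRate());
    long endFrame = (long) (endMs / 1000.0 * fmt.getFrameRate());
    long bytesToSkip = startFrame * fmt.getFrameSize(); // always a whole number of frames
    long skipped = 0;
    while (skipped < bytesToSkip) {
        long n = in.skip(bytesToSkip - skipped);
        if (n <= 0) break; // end of stream reached early
        skipped += n;
    }
    return new AudioInputStream(in, fmt, endFrame - startFrame);
}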
In the above example you need to use a SequenceInputStream; then it will work fine. Please find my code below to join two files.
import java.io.File;
import java.io.IOException;
import java.io.SequenceInputStream;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
public class JoinWav{
public static void main(String... args) throws Exception{
String wav_1 = "1497434542598100215.wav";
String wav_2 = "104860397153760.wav";
AudioInputStream stream_1 = AudioSystem.getAudioInputStream(new File(wav_1));
AudioInputStream stream_2 = AudioSystem.getAudioInputStream(new File(wav_2));
System.out.println("Info : Format ["+stream_1.getFormat()+"] Frame Length ["+stream_1.getFrameLength()+"]");
AudioInputStream stream_join = new AudioInputStream(new SequenceInputStream(stream_1,stream_2),stream_1.getFormat(),stream_1.getFrameLength()+stream_2.getFrameLength());
AudioSystem.write(stream_join,AudioFileFormat.Type.WAVE,new File("join.wav"));
}
}
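If more than two files need to be joined, the same idea can be applied repeatedly. A rough sketch (the helper name and the use of a List are my own):
// Chain an arbitrary list of WAV files into one AudioInputStream,
// assuming they all share the same audio format
static AudioInputStream joinAll(java.util.List<File> files) throws Exception {
    AudioInputStream joined = AudioSystem.getAudioInputStream(files.get(0));
    for (int i = 1; i < files.size(); i++) {
        AudioInputStream next = AudioSystem.getAudioInputStream(files.get(i));
        joined = new AudioInputStream(
                new SequenceInputStream(joined, next),
                joined.getFormat(),
                joined.getFrameLength() + next.getFrameLength());
    }
    return joined; // write with AudioSystem.write(joined, AudioFileFormat.Type.WAVE, outFile)
}
Note that this kind of concatenation only produces a valid result when all the inputs share the same audio format; otherwise they need to be converted first.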