I am trying to take two .wav files, convert them to the same audio format, and then concatenate them. I only need the concatenated file and want to delete the others. The problem is that I cannot delete them because the AudioInputStreams are not closed. After debugging, I discovered that the streams are still open after this method executes:
private static void convertFilesToSameAudioFormat(String fileName1, String fileName2) {
    try (AudioInputStream clip = AudioSystem.getAudioInputStream(new File(fileName2));
         AudioInputStream clip1 = AudioSystem.getAudioInputStream(new File(fileName1));
         AudioInputStream clip2 = AudioSystem.getAudioInputStream(clip1.getFormat(), clip)) {
        AudioSystem.write(clip2, AudioFileFormat.Type.WAVE, new File("temp.wav"));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
So after this method executes, the files backing clip, clip1, and clip2 cannot be deleted, because they are still in use by the program.
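For reference, this is a sketch of the clean-up step I am after (a hypothetical helper, assuming the input files are reachable by the same paths); java.nio.file.Files.delete at least explains why a delete fails instead of just returning false:

// Hypothetical clean-up helper: Files.delete throws an informative
// IOException (e.g. "being used by another process") instead of
// silently returning false the way File.delete() does.
private static void deleteSourceFiles(String fileName1, String fileName2) {
    try {
        Files.delete(Paths.get(fileName1));   // java.nio.file.Files / java.nio.file.Paths
        Files.delete(Paths.get(fileName2));
    } catch (IOException e) {
        e.printStackTrace();                  // reports which file is still locked and why
    }
}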
I've been stuck on this for what feels like at least a week. I have looked at various solutions to similar issues and related sources and documentation, but I still don't understand how this is supposed to work. I'm trying to use the Java Discord API (JDA) with CMU Sphinx 4.
In this first chunk of code, the handleUserAudio() method runs when a user speaks in Discord. It fires every 20 ms and delivers a UserAudio object containing a 20 ms stereo byte[].
public class SpazVoiceListener extends ListenerAdapter {

    // Joins the channel of the user that types "-join"; audio from each user is sent to the converter
    public void run(MessageReceivedEvent event) {
        VoiceChannel userVoiceChannel = (VoiceChannel) event.getMember().getVoiceState().getChannel();
        VoiceFileSaver voiceFileSaver = new VoiceFileSaver();

        try {
            userVoiceChannel.getGuild().getAudioManager().openAudioConnection(userVoiceChannel);
        } catch (Exception e) {
            System.out.println("Error connecting to voice channel: " + e.getMessage());
        }

        try {
            userVoiceChannel.getGuild().getAudioManager().setReceivingHandler(new AudioReceiveHandler() {
                @Override
                public boolean canReceiveCombined() {
                    return false;
                }

                @Override
                public boolean canReceiveUser() {
                    return true;
                }

                @Override
                public void handleCombinedAudio(@NotNull CombinedAudio combinedAudio) {
                }

                @Override
                public void handleUserAudio(@NotNull UserAudio userAudio) {
                    voiceFileSaver.newStream(userAudio);
                }
            });
        } catch (Exception e) {
            System.out.println("Error setting Audio Receive Handler: " + e.getMessage());
        }
    }
}
I've created an object to store the user's User object. I'm also attempting to take each 20 ms byte[], convert it from stereo to mono, save it as a .wav, then "concatenate" the .wav files together into one .wav file containing the user's full speaking event. Later I'll run the full .wav file through voice recognition.
The HashMap is just a way for me to keep track of the User objects so I can check whether one already exists, nothing more.
public class VoiceFileSaver {

    HashMap<User, String> usersAudioData = new HashMap<>();

    public VoiceFileSaver() {
    }

    public void newStream(UserAudio userAudio) {
        try {
            AudioInputStream newAudio = convertToMono(userAudio.getAudioData(1));

            // If the user key exists, the new audio is appended to the existing audio file
            if (usersAudioData.containsKey(userAudio.getUser())) {
                try {
                    AudioSystem.write(newAudio, AudioFileFormat.Type.WAVE,
                            new File("src/main/resources/tmp/" + userAudio.getUser().getIdLong() + "TEMP.wav"));
                    AudioInputStream convertedClip = AudioSystem.getAudioInputStream(
                            new File("src/main/resources/tmp/" + userAudio.getUser().getIdLong() + "TEMP.wav"));
                    AudioInputStream existingClip = AudioSystem.getAudioInputStream(
                            new File("src/main/resources/tmp/" + userAudio.getUser().getIdLong() + ".wav"));
                    AudioInputStream appendedAudio = new AudioInputStream(
                            new SequenceInputStream(existingClip, convertedClip),
                            existingClip.getFormat(),
                            existingClip.getFrameLength() + convertedClip.getFrameLength());
                    AudioSystem.write(appendedAudio, AudioFileFormat.Type.WAVE,
                            new File("src/main/resources/tmp/" + userAudio.getUser().getIdLong() + ".wav"));
                } catch (Exception e) {
                    System.out.println("Error appending audio files: " + e.getMessage());
                }
            } else {
                // If the user key does not exist, create a new entry with the user key and the file name as a String
                usersAudioData.put(userAudio.getUser(),
                        "src/main/resources/tmp/" + userAudio.getUser().getIdLong() + ".wav");
                AudioSystem.write(newAudio, AudioFileFormat.Type.WAVE,
                        new File("src/main/resources/tmp/" + userAudio.getUser().getIdLong() + ".wav"));
            }
        } catch (Exception e) {
            System.out.println("Error converting from stereo to mono: " + e.getMessage());
        }
    }

    public AudioInputStream convertToMono(byte[] audio) {
        AudioFormat targetFormat = new AudioFormat(16000f, 16, 1, true, false);
        AudioInputStream ais = new AudioInputStream(new ByteArrayInputStream(audio), targetFormat, audio.length);
        return ais;
    }
}
The original byte[] from userAudio is 48 kHz, 16-bit, stereo, signed, big-endian PCM.
I want to convert it to a .wav file for CMU Sphinx 4, which expects 16 kHz, 16-bit, mono, little-endian. The end result I'm getting is a 20 ms static-sounding blip. This tells me that, first, the audio isn't being converted properly and, second, the files aren't being concatenated properly.
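For context, the conversion I'm aiming for looks roughly like this (a sketch only; the method name is hypothetical, it assumes the source really is 48 kHz / 16-bit / stereo / signed / big-endian, and the default Java Sound providers may not support the 48 kHz to 16 kHz sample-rate step, in which case getAudioInputStream throws IllegalArgumentException):

// Sketch: wrap the raw bytes with the format they are actually in, then
// ask Java Sound for a stream converted to the Sphinx-friendly format.
public AudioInputStream convertToTargetFormat(byte[] audio) {
    AudioFormat sourceFormat = new AudioFormat(48000f, 16, 2, true, true);   // assumed input format
    AudioFormat targetFormat = new AudioFormat(16000f, 16, 1, true, false);  // 16 kHz mono little-endian

    AudioInputStream source = new AudioInputStream(
            new ByteArrayInputStream(audio),
            sourceFormat,
            audio.length / sourceFormat.getFrameSize());   // length is in frames, not bytes

    // May fail if no installed converter handles the sample-rate change.
    return AudioSystem.getAudioInputStream(targetFormat, source);
}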
I had previously tested the userAudio byte[] with Clip objects, so I know the 20ms Discord audio clips coming in are really working.
Please let me know if more information is needed and I will provide whatever I can.
I am trying to join two audio files stored in Amazon S3 using Java. Any ideas how to go about it? Should I use S3ObjectInputStream and then AudioInputStream? I tried that, but I am getting an unsupported file format error even though it is a valid wav file. Please advise.
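One thing worth checking (a sketch, assuming the AWS SDK for Java v1 and that s3Client, bucketName, and key already exist): AudioSystem.getAudioInputStream(InputStream) needs a stream that supports mark/reset, which the raw S3 object stream does not, so wrap it in a BufferedInputStream first.

// Sketch: buffer the S3 object stream before handing it to Java Sound.
S3Object object = s3Client.getObject(bucketName, key);
try (InputStream raw = object.getObjectContent();
     BufferedInputStream buffered = new BufferedInputStream(raw);
     AudioInputStream audio = AudioSystem.getAudioInputStream(buffered)) {
    // audio can now be appended to another AudioInputStream via a
    // SequenceInputStream and written out with AudioSystem.write(...)
}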
I too am trying to do this; so far, this is what I've got. We use the input stream to write to a file because, as far as I know, there is no good way to take the stream and play it directly from MediaPlayer.
Once we have the file, we give the path to MediaPlayer and let it play. However, my issue is that the audio file does not download completely; I think this has to do with the buffer size. Also, there is no pause or play, so you will still need a GUI for that.
Don't forget to run this on a different thread than the main one.
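For example, the whole download-and-play block below can be pushed off the main thread with something like this (downloadAndPlay() is just a hypothetical wrapper around the code that follows):

// Run the S3 download + MediaPlayer setup off the main/UI thread.
new Thread(() -> {
    downloadAndPlay();   // hypothetical method wrapping the code below
}).start();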
String bucket = getContext().getString(R.string.bucketName);
String key = "somekey";
InputStream s3ObjectInputStream = s3.getObject(bucket, key).getObjectContent();
byte[] buffer = new byte[1024];

try {
    File targetFile = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/temp.mp4");
    OutputStream outStream = new FileOutputStream(targetFile);

    // Write only the bytes actually read; writing the whole buffer each time
    // appends stale data and corrupts the end of the file.
    int bytesRead;
    while ((bytesRead = s3ObjectInputStream.read(buffer)) != -1) {
        outStream.write(buffer, 0, bytesRead);
    }
    outStream.close();

    MediaPlayer mp = new MediaPlayer();
    try {
        mp.setDataSource(Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + "temp.mp4");
        mp.prepare();
        mp.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
} catch (IOException e) {
    e.printStackTrace();
}
Hi, I have the following Java program that plays some sounds. I want to play the sounds in order: for example, after sound1 ends I want to play sound2 and then sound3. Below is my Java code and the function that plays a sound.
private void playsound(String file)
{
    try {
        crit = AudioSystem.getClip();
        AudioInputStream inputStream1 = AudioSystem.getAudioInputStream(this.getClass().getResource(file));
        crit.open(inputStream1);
        //if(!crit.isOpen())
        {
            crit.start();
        }
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }
}
and I call it as follows:
playsound("/sounds/filesound1.au");
playsound("/sounds/filesound2.au");
playsound("/sounds/filesound3.au");
The program is playing the sounds in parallel, which I don't want; I want to play them in order.
Regards
I got the following code from somewhere that I can't remember right now, but it plays the files consecutively:
public static void play(ArrayList<String> files) {
    byte[] buffer = new byte[4096];
    for (String filePath : files) {
        File file = new File(filePath);
        try {
            AudioInputStream is = AudioSystem.getAudioInputStream(file);
            AudioFormat format = is.getFormat();
            SourceDataLine line = AudioSystem.getSourceDataLine(format);
            line.open(format);
            line.start();
            while (is.available() > 0) {
                int len = is.read(buffer);
                line.write(buffer, 0, len);
            }
            line.drain();
            line.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
The reason this plays the files consecutively, and not all at the same time, is that write blocks until the requested amount of data has been written. This applies even if the requested amount of data is greater than the data line's buffer size.
Make sure to include the drain() call from the code above: drain() waits for the line's buffer to empty before close() is called.
Even though the sun.audio API says that .wav is a supported format, apparently the one I had must not have been. A .aiff file is now working, though not with that approach; I found a better way that's a little more complicated.
final int EXTERNAL_BUFFER_SIZE = 128000;  // size of the playback read buffer

String strFilename = "C:\\Documents and Settings\\gkehoe\\Network\\GIM\\Explode.aiff";
File soundFile = new File(strFilename);

AudioInputStream audioInputStream = null;
try
{
    audioInputStream = AudioSystem.getAudioInputStream(soundFile);
}
catch (Exception e)
{
    e.printStackTrace();
}

AudioFormat audioFormat = audioInputStream.getFormat();

SourceDataLine line = null;
DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
try
{
    line = (SourceDataLine) AudioSystem.getLine(info);
    /*
      The line is there, but it is not yet ready to
      receive audio data. We have to open the line.
    */
    line.open(audioFormat);
}
catch (LineUnavailableException e)
{
    e.printStackTrace();
    System.exit(1);
}
catch (Exception e)
{
    e.printStackTrace();
    System.exit(1);
}

line.start();

int nBytesRead = 0;
byte[] abData = new byte[EXTERNAL_BUFFER_SIZE];
while (nBytesRead != -1)
{
    try
    {
        nBytesRead = audioInputStream.read(abData, 0, abData.length);
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    if (nBytesRead >= 0)
    {
        int nBytesWritten = line.write(abData, 0, nBytesRead);
    }
}

line.drain();
/*
  All data are played. We can close the shop.
*/
line.close();
According to the source code, it is not recognized as a supported file format.
Wav files are supported, but there are many variants, and some of them are not supported.
For example, you might get an unrecognized format exception if the wav is encoded at 48000 instead of 44100, or at 24 or 32 bits instead of 16 bit encoding.
What exact error did you get?
What are the specs (properties) of the wav file?
It is possible to convert from one wav to a compatible wav using a tool such as Audacity. A format that I use for wav files has the following properties:
16-bit encoding
little endian
44100 sample rate
stereo
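If it helps, a quick way to check what a given wav actually contains is to ask Java Sound for its AudioFileFormat (a minimal sketch; pass the wav path as the first argument):

// Prints the header-level format of a wav so you can compare it against
// what Java Sound's default providers support.
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioSystem;
import java.io.File;

public class WavInfo {
    public static void main(String[] args) throws Exception {
        AudioFileFormat fileFormat = AudioSystem.getAudioFileFormat(new File(args[0]));
        System.out.println("Type:   " + fileFormat.getType());
        System.out.println("Format: " + fileFormat.getFormat()); // encoding, rate, bits, channels, endianness
    }
}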
I didn't really look closely at the code example itself. I like this playback example.
I have a Java application whose UI relies heavily on audio. On Windows and OS X everything works fine; on Linux, however, the application requires exclusive access to the sound device, otherwise a LineUnavailableException is thrown and no sound is heard. I'm using Kubuntu 9.10.
This means that no other application can play audio while the program is running, and none can even be holding the audio device when the program starts. This is naturally unacceptable.
Here is the code I'm using to play audio:
AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
Clip clip = AudioSystem.getClip();
clip.open(audioInputStream);
clip.start();
this.wait((clip.getMicrosecondLength() / 1000) + 100);
clip.stop();
Am I doing something wrong? Is using Java to play audio in Linux a lost cause?
I fear that audio on Linux is a lost cause in itself. But in this case, it really is a known Java bug. You should try to figure out what sound architecture you are using. I think the default for Ubuntu is PulseAudio/ALSA; I'm not sure about Kubuntu, though.
There is a known workaround (I never tried it myself though).
It's also possible that some other application you're running is exclusively using the sound card, so make sure to test against different applications (i.e. applications that play nicely with others).
I was able to play audio on GNU/Linux (Ubuntu 10.10) using OpenJDK with some tweaks. I believe the LineUnavailableException was a bug in PulseAudio that was fixed in 10.10.
I needed to specify the format (something that is not needed on Windows).
AudioInputStream audioIn = AudioSystem.getAudioInputStream(in);
// needed for working on GNU/Linux (openjdk) {
AudioFormat format = audioIn.getFormat();
DataLine.Info info = new DataLine.Info(Clip.class, format);
Clip clip = (Clip)AudioSystem.getLine(info);
// }
// on windows, {
//Clip clip = AudioSystem.getClip();
// }
Be aware that the call to Clip.getMicrosecondLength() returned milliseconds in my case, even though the API documentation says microseconds.
Java Sound is terrible for high-precision or low-latency tasks, and almost totally dysfunctional on Linux. Abandon ship now before you sink more time into it.
After Java Sound I tried OpenAL which wasn't great on Linux either.
Currently I'm using FMOD which is unfortunately closed-source.
The open source way to go would probably be PortAudio. Try talking to the SIP Communicator devs.
I also tried RtAudio but found it had bugs with its ALSA implementation.
Send an mplayer command through a shell. It's the easiest solution.
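For example, a minimal sketch (assuming mplayer is installed and on the PATH, and that the file path is a placeholder):

// Launches mplayer as an external process and waits for playback to finish.
ProcessBuilder pb = new ProcessBuilder("mplayer", "/path/to/sound.wav");
pb.inheritIO();                      // forward mplayer's console output to this process
Process mplayer = pb.start();        // throws IOException if mplayer is missing
int exitCode = mplayer.waitFor();    // blocks until playback ends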
I got this code from somewhere on the internet. The sound plays most of the time, but occasionally it doesn't.
import java.io.*;
import javax.sound.sampled.*;

public class Sound2
{
    public static void main(String[] args)
    {
        playSound("somesound.wav");
    }

    public static void playSound(String filename)
    {
        int BUFFER_SIZE = 128000;
        AudioInputStream audioStream = null;
        AudioFormat audioFormat = null;
        SourceDataLine sourceLine = null;

        // Open the audio file; the BufferedInputStream provides the
        // mark/reset support that getAudioInputStream(InputStream) needs.
        try
        {
            audioStream = AudioSystem.getAudioInputStream(
                    new BufferedInputStream(new FileInputStream(filename)));
        }
        catch (Exception e)
        {
            e.printStackTrace();
            System.exit(1);
        }

        audioFormat = audioStream.getFormat();
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);

        try
        {
            sourceLine = (SourceDataLine) AudioSystem.getLine(info);
            sourceLine.open(audioFormat);
        }
        catch (LineUnavailableException e)
        {
            e.printStackTrace();
            System.exit(1);
        }
        catch (Exception e)
        {
            e.printStackTrace();
            System.exit(1);
        }

        sourceLine.start();

        int nBytesRead = 0;
        byte[] abData = new byte[BUFFER_SIZE];
        while (nBytesRead != -1)
        {
            try
            {
                nBytesRead = audioStream.read(abData, 0, abData.length);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
            if (nBytesRead >= 0)
            {
                @SuppressWarnings("unused")
                int nBytesWritten = sourceLine.write(abData, 0, nBytesRead);
            }
        }

        // Wait for the buffered data to finish playing, then release the line.
        sourceLine.drain();
        sourceLine.close();
    }
}