Ubuntu 16.04, Java, WAV file playback - java

This is a weird thing. I'm trying to play back some sounds via the Java AudioSystem and AudioSystem.getClip(). The files are all "PCM_SIGNED, 22050.0 Hz, 16 bit, mono, 2 bytes/frame, little-endian".
On several Ubuntu 16.04 LTS Linux boxes this format is rejected by PulseAudio with an "Invalid format" exception, because the only accepted format is seemingly "PCM_SIGNED, unknown sample rate, 16 bit, stereo, 4 bytes/frame, big-endian".
I already tried to resample my WAVs to match this strange constraint, to no avail. Those files are then not even accepted by AudioSystem.getAudioInputStream() anymore.
Needless to say, the same code works fine on macOS and Windows. There is also no problem playing these files with the sox utility via play file.wav.

OK, solved.
Usually, if one asks how to play back a WAV file using Java, this is the most common answer:
AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(
        DragonflyApp.class.getResource("/resources/" + soundFile));
Clip clip = AudioSystem.getClip();
clip.addLineListener(e -> {
    if (e.getType() == LineEvent.Type.STOP) {
        // Do something on end of playback
    }
});
clip.open(audioInputStream);
clip.start();
Unfortunately, on some Linux systems this ends in an "Invalid format" exception thrown by PulseAudio, which claims to be unable to play even the simplest WAV file (see above).
The workaround is to use the following sequence under Linux instead. It generally also works on macOS, but there the final "STOP" indication comes very late (roughly 5 seconds after playback ends), so I execute the two variants conditionally.
This works on Linux (at least on Ubuntu 16.04) with clips that were formerly rejected by PulseAudio:
// Note: DataPusher is the JDK-internal class com.sun.media.sound.DataPusher,
// which feeds an AudioInputStream to a SourceDataLine on a background thread.
DataLine.Info lineinfo = new DataLine.Info(SourceDataLine.class, audioInputStream.getFormat());
if (!AudioSystem.isLineSupported(lineinfo)) {
    return;
}
SourceDataLine sourcedataline = (SourceDataLine) AudioSystem.getLine(lineinfo);
sourcedataline.addLineListener(e -> {
    if (e.getType() == LineEvent.Type.STOP) {
        // Do something on end of playback
    }
});
DataPusher datapusher = new DataPusher(sourcedataline, audioInputStream);
datapusher.start();
Both code snippets are used conditionally:
if (System.getProperty("os.name").equals("Mac OS X")) {
    // The clip solution
} else {
    // The datapusher solution
}
Hope that helps others who run into this problem.

Related

LineUnavailableException when loading audio clip with Java on RPi

My RPi is a Zero W running Raspbian Jessie 4.9.35 / Oracle JDK 8 1.8.0_65.
I'm having a problem loading audio clips in a Java program on the RPi.
I have two audio files named "piano_0.wav" and "piano_1.wav"; they are different audio files.
I used this to load the clip:
Clip loadClip(String path) {
    Clip clip = null;
    try {
        clip = AudioSystem.getClip();
        AudioInputStream stream = AudioSystem.getAudioInputStream(new File(path));
        clip.open(stream);
    } catch (Exception e) {
        Logger.getLogger(MusicManager.class.getName()).log(Level.SEVERE, null, e);
    }
    return clip;
}
When I try to load piano_0.wav, there are no error logs and the returned Clip can be played. But when I try to load piano_1.wav, the program throws an exception:
javax.sound.sampled.LineUnavailableException: line with format PCM_SIGNED 44100.0 Hz, 16 bit, stereo, 4 bytes/frame, little-endian not supported.
    at com.sun.media.sound.DirectAudioDevice$DirectDL.implOpen(DirectAudioDevice.java:513)
    at com.sun.media.sound.DirectAudioDevice$DirectClip.implOpen(DirectAudioDevice.java:1304)
    at com.sun.media.sound.AbstractDataLine.open(AbstractDataLine.java:121)
    at com.sun.media.sound.DirectAudioDevice$DirectClip.open(DirectAudioDevice.java:1085)
    at com.sun.media.sound.DirectAudioDevice$DirectClip.open(DirectAudioDevice.java:1175)
    at beatstairscmd.BeatStairsCMD.testMusicClip(BeatStairsCMD.java:81)
    at beatstairscmd.BeatStairsCMD.main(BeatStairsCMD.java:42)
But when I use this code on my desktop, there is no problem loading more clips.
I already tried initializing the clip in other ways instead of AudioSystem.getClip(), and nothing changed.
How should I fix this problem?
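One way to narrow down a device-specific failure like this is to enumerate the available mixers and ask each one whether it can open a Clip with the file's format. A minimal diagnostic sketch, not a fix; the class name MixerProbe, the standalone main, and the hard-coded file name are illustrative assumptions, not code from the question:
import javax.sound.sampled.*;
import java.io.File;

public class MixerProbe {
    public static void main(String[] args) throws Exception {
        AudioFormat format = AudioSystem
                .getAudioInputStream(new File("piano_1.wav"))
                .getFormat();
        DataLine.Info info = new DataLine.Info(Clip.class, format);
        // Print each mixer and whether it supports a Clip with this format.
        for (Mixer.Info mi : AudioSystem.getMixerInfo()) {
            System.out.println(mi.getName() + ": "
                    + AudioSystem.getMixer(mi).isLineSupported(info));
        }
    }
}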

Issues with SourceDataLine format support

I have an application written in Java in which I need to play audio. I used OpenAL (with the java-openal library) for the task; however, I would like to use WSOLA, which OpenAL does not support directly. I found a nice Java-native library called TarsosDSP which has support for WSOLA.
The library uses the standard Java APIs for audio output. The issue occurs during the SourceDataLine setup:
IllegalArgumentException: No line matching interface SourceDataLine supporting format PCM_UNSIGNED 16000.0 Hz, 16 bit, mono, 2 bytes/frame, little-endian is supported.
I made sure the issue is not caused by a lack of permissions (I ran it as root on Linux and tried it on Windows 10), and there are no other SourceDataLines used in the project.
After tinkering with the format, I found out that it is accepted when changed from PCM_UNSIGNED to PCM_SIGNED. It seems like a minor issue, since only moving the byte range from unsigned to signed should be pretty easy. However, it's weird that unsigned isn't supported natively.
So, is there some solution in which I wouldn't have to modify the source data?
Thanks, Jan
You don't have to move the byte range by hand. After you've created an AudioInputStream, you create a second AudioInputStream with a signed format, connected to the first, unsigned stream. If you then read the data through the signed stream, the Sound API automatically converts the format. This way you don't need to modify the source data.
File fileWithUnsignedFormat;
AudioInputStream sourceInputStream;
AudioInputStream targetInputStream;
AudioFormat sourceFormat;
AudioFormat targetFormat;
SourceDataLine sourceDataLine;

sourceInputStream = AudioSystem.getAudioInputStream(fileWithUnsignedFormat);
sourceFormat = sourceInputStream.getFormat();
targetFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
        sourceFormat.getSampleRate(),
        sourceFormat.getSampleSizeInBits(),
        sourceFormat.getChannels(),
        sourceFormat.getFrameSize(),
        sourceFormat.getFrameRate(),
        false);
targetInputStream = AudioSystem.getAudioInputStream(targetFormat, sourceInputStream);

DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, targetFormat);
sourceDataLine = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
sourceDataLine.open(targetFormat);
sourceDataLine.start();

// schematic: read from the converting (signed) stream, write to the line
targetInputStream.read(byteArray, 0, byteArray.length);
sourceDataLine.write(byteArray, 0, byteArray.length);

Java - Sound Won't Load - AudioInputStream [duplicate]

The message on the shell is:
Exception in thread "main" java.lang.IllegalArgumentException: Invalid format
    at org.classpath.icedtea.pulseaudio.PulseAudioDataLine.createStream(PulseAudioDataLine.java:142)
    at org.classpath.icedtea.pulseaudio.PulseAudioDataLine.open(PulseAudioDataLine.java:99)
    at org.classpath.icedtea.pulseaudio.PulseAudioDataLine.open(PulseAudioDataLine.java:283)
    at org.classpath.icedtea.pulseaudio.PulseAudioClip.open(PulseAudioClip.java:402)
    at org.classpath.icedtea.pulseaudio.PulseAudioClip.open(PulseAudioClip.java:453)
    at reprod.ReproducirFichero(reprod.java:16)
    at reprod.main(reprod.java:44)
I tried to download new audio drivers, I tried to reinstall OpenJDK 7 and OpenJRE 7, and I also tried to install Oracle Java 7.
I have tried my code on another computer and it works. The desktop board that I use is an Intel D525MW, the audio format that I'm trying to play is .wav, and the version of Linux that I use is Ubuntu 12.04.3. Please, I need help. Thanks.
Here is part of my code; I try to play a .wav audio file:
import javax.sound.sampled.*;
import java.io.File;

public class reprod {
    public static void play() {
        try {
            Clip cl = AudioSystem.getClip();
            File f = new File("/home/usr/Desktop/d.wav");
            AudioInputStream ais = AudioSystem.getAudioInputStream(f);
            cl.open(ais);
            cl.start();
            System.out.println("playing...");
            while (cl.isRunning())
                Thread.sleep(4000);
            cl.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I solved the problem by simply passing the parameter null into AudioSystem.getClip().
I don't know why this exception occurred. I ran this project before on Windows and it worked; then on Linux, it didn't.
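For illustration, the change is a single line. This calls the getClip(Mixer.Info) overload; why a null mixer info sidesteps the PulseAudio exception is, as said above, unclear:
Clip cl = AudioSystem.getClip(null); // instead of the no-arg AudioSystem.getClip()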
I had the same problem and found this code to work:
File soundFile = new File("/home/usr/Desktop/d.wav");
AudioInputStream soundIn = AudioSystem.getAudioInputStream(soundFile);
AudioFormat format = soundIn.getFormat();
DataLine.Info info = new DataLine.Info(Clip.class, format);
Clip clip = (Clip) AudioSystem.getLine(info);
clip.open(soundIn);
clip.start();
while (clip.isRunning()) {
    Thread.yield();
}
The key is in soundIn.getFormat(). To quote the docs:
Obtains the audio format of the sound data in this audio input stream.
Source: http://ubuntuforums.org/showthread.php?t=1469572
The error message says that the input file format is wrong somehow.
If you gave us more information (the file format, maybe where you got the file, the code that you use to open it, and how you configured the audio drivers), we might be able to help.
See this question for some code that you can try: How to play .wav files with java

How does one use Java audio?

I am building a speech synthesizer, and everything works except the audio. I have a list of phonemes that are stored as .wav files, and I am calling them with AudioInputStreams, but they won't repeat. I have no idea what the issue could be, so any help would be appreciated.
The code that initializes a HashMap full of phonemes is:
for (File phone : listOfFiles) {
    String path = phone.getPath();
    if (path.startsWith(".")) { continue; }
    path = path.replace(".wav", "").replace("phones/", "");
    AudioInputStream clip1 = AudioSystem.getAudioInputStream(phone);
    phonemes.put(path, clip1);
}
and the code that combines and outputs the sound is:
public void speak(String[] input) {
    AudioInputStream phrase = phonemes.get(input[0]);
    AudioInputStream phone;
    for (int i = 1; i < input.length; i++) {
        phone = phonemes.get(input[i]);
        phrase = new AudioInputStream(
                new SequenceInputStream(phrase, phone),
                phrase.getFormat(),
                phrase.getFrameLength() + phone.getFrameLength());
    }
    try {
        Clip clip = AudioSystem.getClip();
        clip.open(phrase);
        clip.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
To replay a Clip, you have to stop it and reposition it, then start it again. I don't think you can close and reopen a given Clip. But attempts to do that should have generated a LineUnavailableException, and you say you got no exceptions.
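For reference, the stop/reposition/start sequence on an already opened Clip looks like this:
clip.stop();
clip.setFramePosition(0); // rewind to the first frame
clip.start();             // play again from the beginning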
To troubleshoot, I'd first verify that it is possible to play the .wav files before placing them in the hash table. Sometimes an unexpected format (e.g., 24-bit or 32-bit encoding, or big-endian rather than little-endian) can lead to .wav files not playing.
If you are trying to concatenate a series of clips or audio data into a single clip, that could also be problematic. I think that AudioInputStream expects a single set of "header" data from the .wav file, but the SequenceInputStream could in effect be sending multiple "headers", one for each source file. I've never seen concatenation attempted like that before.
You might need to make your own data storage for the raw audio of each phoneme, and then build your combined phrases from that rather than directly from the .wav files. Instead of loading Clips, load the raw PCM from each AudioInputStream into a byte array. To output the raw audio bytes, you can use a SourceDataLine. A sketch of that idea follows.
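A minimal sketch of that approach, assuming every phoneme file shares the same AudioFormat and that javax.sound.sampled.*, java.io.File, and java.io.ByteArrayOutputStream are imported; the method name, the 4096-byte buffer, and the exception handling are illustrative, not from the original code:
void speakRaw(File[] listOfFiles) throws Exception {
    // Collect the raw PCM of every phoneme into one byte array.
    ByteArrayOutputStream pcm = new ByteArrayOutputStream();
    AudioFormat format = null;
    for (File phone : listOfFiles) {
        AudioInputStream in = AudioSystem.getAudioInputStream(phone);
        format = in.getFormat(); // assumed identical for every file
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf, 0, buf.length)) != -1) {
            pcm.write(buf, 0, n);
        }
    }
    // Play the concatenated bytes through a SourceDataLine.
    byte[] audio = pcm.toByteArray();
    SourceDataLine line = AudioSystem.getSourceDataLine(format);
    line.open(format);
    line.start();
    line.write(audio, 0, audio.length); // blocks until all bytes are queued
    line.drain(); // wait for playback to finish
    line.close();
}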

Recording timestamps of audio samples while recording using a Java application

I've made a Java audio recorder and would like to know the system timestamp of each audio sample that I record. I am recording for 1 second at 44.1 kHz. For each sample (there are 441000) I would like to record the time (system timestamp) at which the microphone detected the sound. How would I do this, if it is possible? I would like an accuracy of +-1 ms.
This is a snapshot of the code I'm using:
AudioFormat format = new AudioFormat(44100f, 8, 1, true, false);
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
line.open(format);
line.start();
ByteArrayOutputStream out = new ByteArrayOutputStream(); // collects the recording
byte[] buff = new byte[line.getBufferSize()];
while (recording) {
    int index = line.read(buff, 0, buff.length);
    out.write(buff, 0, index);
}
line.stop();
line.close();
byte[] audio = out.toByteArray();
Thanks
Edit
Getting a timestamp every other sample, or even every 10 samples, would be OK as long as it's accurate. Also, I meant 44100* samples.
Actually, recording the start time is the simplest answer. You can always determine the start and stop time of any sound with incredible accuracy (per frame!) by using the frame count. At 44100 frames per second, if your analysis shows that pitch A starts at frame 22050 and ends at frame 33075, for example, then you know that the sound went from exactly (start + 500) milliseconds to (start + 750) milliseconds. It's just a simple multiply operation.
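In code, that computation is just the following (the variable names are illustrative):
long startMillis = System.currentTimeMillis(); // captured when the line starts
double sampleRate = 44100.0;
// timestamp of frame n = start time + n / sampleRate, in milliseconds
long tA = startMillis + Math.round(22050 * 1000.0 / sampleRate); // startMillis + 500
long tB = startMillis + Math.round(33075 * 1000.0 / sampleRate); // startMillis + 750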
Are you using some sort of Fast Fourier analysis to get the pitches?
It is possible to tie event notification to Lines. Check out LineEvent and LineListener in the javax.sound.sampled API.
By the way, using the frame count is possibly more accurate than timestamping, due to vagaries introduced by JVM time-slicing. Java gives continuous/accurate playback of sound a high priority, but makes few real-time guarantees otherwise.
