try {
    //String location = dir1.getCanonicalPath()+"\\app_yamb_test1\\mySound.au";
    //displayMessage(location);
    AudioInputStream audio2 = AudioSystem.getAudioInputStream(getClass().getResourceAsStream("mySound.au"));
    Clip clip2 = AudioSystem.getClip();
    clip2.open(audio2);
    clip2.start();
} catch (UnsupportedAudioFileException uae) {
    System.out.println(uae);
    JOptionPane.showMessageDialog(null, uae.toString());
} catch (IOException ioe) {
    System.out.println("Couldn't find it");
    JOptionPane.showMessageDialog(null, ioe.toString());
} catch (LineUnavailableException lua) {
    System.out.println(lua);
    JOptionPane.showMessageDialog(null, lua.toString());
}
This code works fine when I run the application from NetBeans: the sound plays and there are no exceptions. However, when I run it from the dist folder, the sound does not play and I get java.io.IOException: mark/reset not supported in my message dialog.
How can I fix this?
The documentation for AudioSystem.getAudioInputStream(InputStream) says:
The implementation of this method may require multiple parsers to examine the stream to determine whether they support it. These parsers must be able to mark the stream, read enough data to determine whether they support the stream, and, if not, reset the stream's read pointer to its original position. If the input stream does not support these operations, this method may fail with an IOException.
Therefore, the stream you provide to this method must support the optional mark/reset functionality. Decorate your resource stream with a BufferedInputStream.
//read audio data from whatever source (file/classloader/etc.)
InputStream audioSrc = getClass().getResourceAsStream("mySound.au");
//add buffer for mark/reset support
InputStream bufferedIn = new BufferedInputStream(audioSrc);
AudioInputStream audioStream = AudioSystem.getAudioInputStream(bufferedIn);
After floundering about for a while and referencing this page many times, I stumbled across this, which helped with my problem. I was initially able to load a WAV file, but could subsequently play it only once, because the stream could not be rewound due to the "mark/reset not supported" error. It was maddening.
The linked code reads an AudioInputStream from a file, wraps that AudioInputStream in a BufferedInputStream, then wraps that back into an AudioInputStream, like so:
audioInputStream = AudioSystem.getAudioInputStream(new File(filename));
BufferedInputStream bufferedInputStream = new BufferedInputStream(audioInputStream);
audioInputStream = new AudioInputStream(bufferedInputStream, audioInputStream.getFormat(), audioInputStream.getFrameLength());
And then finally it converts the read data to a PCM encoding:
audioInputStream = convertToPCM(audioInputStream);
With convertToPCM defined as:
private static AudioInputStream convertToPCM(AudioInputStream audioInputStream)
{
    AudioFormat m_format = audioInputStream.getFormat();

    if ((m_format.getEncoding() != AudioFormat.Encoding.PCM_SIGNED) &&
        (m_format.getEncoding() != AudioFormat.Encoding.PCM_UNSIGNED))
    {
        AudioFormat targetFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                m_format.getSampleRate(), 16,
                m_format.getChannels(), m_format.getChannels() * 2,
                m_format.getSampleRate(), m_format.isBigEndian());
        audioInputStream = AudioSystem.getAudioInputStream(targetFormat, audioInputStream);
    }

    return audioInputStream;
}
I believe they do this because BufferedInputStream supports mark/reset, whereas the stream backing the original AudioInputStream does not. Hope this helps somebody out there.
Just came across this question from someone else with the same problem who referenced it. Looks like this issue arose with Java 7.
Oracle Bug database, #7095006
A test, executed when an InputStream is the argument to the getAudioInputStream() method, is triggering the error. Whether or not the audio resource file itself supports mark/reset has no bearing on whether the Clip will load and play. Given that, there is no reason to prefer an InputStream as the argument when a URL or File will suffice.
If we substitute a URL as the argument, this needless test is not executed. Revising the OP's code:
AudioInputStream ais = AudioSystem.getAudioInputStream(getClass().getResource(fileName));
Details can be seen in the API, in the description text for the two forms.
AudioSystem.getAudioInputStream(InputStream)
AudioSystem.getAudioInputStream(URL)
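For completeness, a minimal sketch of the URL variant applied to the OP's situation (the resource name mySound.au comes from the question; the rest is plain javax.sound.sampled usage):
// Loading via URL avoids the mark/reset test entirely.
URL soundUrl = getClass().getResource("mySound.au");
AudioInputStream ais = AudioSystem.getAudioInputStream(soundUrl);
Clip clip = AudioSystem.getClip();
clip.open(ais);
clip.start();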
The problem is that your input stream has to support the mark and reset methods. You can at least test whether mark is supported with InputStream#markSupported.
So you should maybe use a different InputStream.
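For example, a small guard along these lines (the resource name is taken from the question) ensures the stream handed to AudioSystem supports mark/reset:
InputStream in = getClass().getResourceAsStream("mySound.au");
if (in != null && !in.markSupported()) {
    // BufferedInputStream supplies the mark/reset support the format parsers need.
    in = new BufferedInputStream(in);
}
AudioInputStream ais = AudioSystem.getAudioInputStream(in);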
Related
I am trying to obtain an AudioInputStream from a file in a specific format. When I open the file in fileStream I get a positive frameLength, meaning the file can be read and the AudioInputStream effectively has data. Now I want to convert it with AudioSystem.getAudioInputStream(format, fileStream), but the obtained stream audioStream has a frameLength of -1. I even made sure the conversion is supported, and it is.
public void openFile() throws IOException, UnsupportedAudioFileException, LineUnavailableException {
    AudioInputStream fileStream = AudioSystem.getAudioInputStream(audioFile);
    AudioFormat format = new AudioFormat(sampleRate, 8, 1, true, true);
    boolean supported = AudioSystem.isConversionSupported(format, fileStream.getFormat());
    if (supported) {
        audioStream = AudioSystem.getAudioInputStream(format, fileStream);
        System.out.println("Opened file: " + audioFile.getName());
    } else {
        System.out.println("Couldn't open file: " + audioFile.getName());
        throw new IOException();
    }
}
I am completely stuck and I haven't found anyone with the same exact problem. I welcome any suggestions of different libraries but I would prefer to keep using this one as I'm used to it.
Okay figured it out, I'm dumb.
A frameLength of -1 is, according to the AudioSystem constants, NOT_SPECIFIED, which means that the length information is lost when the audio is converted.
I only thought it was the problem because of another, unrelated issue.
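For anyone hitting the same thing: a frameLength of NOT_SPECIFIED does not mean the converted stream is empty, only that its length is unknown up front. A rough sketch (reusing the audioStream field from my code above) that reads the converted stream to the end and counts the frames itself:
long totalBytes = 0;
byte[] buf = new byte[4096];
int n;
// Reading consumes the stream, so re-open the file afterwards if you still need the audio.
while ((n = audioStream.read(buf)) != -1) {
    totalBytes += n;
}
long frames = totalBytes / audioStream.getFormat().getFrameSize();
System.out.println("Decoded " + frames + " frames");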
I am writing a utility application using the open source, Java-based PDFBox to take a PDF file containing a hyperlink that opens an mp3 file and replace that link with a sound object.
I chose the PDFBox API since it appears mature enough to work with sound objects. I can read the PDF file and find the hyperlink with the reference to the mp3, but I am not able to replace it with a sound object. I created the sound object and associated it with the action, but it does not work. I think I am missing some important part of how to create a Sound object for the PDActionSound object. Is it possible to refer to an external wav file using the PDFBox API?
for (PDPage pdPage : pages) {
    List<PDAnnotation> annotations = pdPage.getAnnotations();
    for (PDAnnotation pdAnnotation : annotations) {
        if (pdAnnotation instanceof PDAnnotationLink) {
            PDAnnotationLink link = ((PDAnnotationLink) pdAnnotation);
            PDAction action = link.getAction();
            if (action instanceof PDActionLaunch) {
                PDActionLaunch launch = ((PDActionLaunch) action);
                String fileInfo = launch.getFile().getFile();
                if (fileInfo.contains(".mp3")) {
                    /* create Sound object referring to external mp3 */
                    // something like
                    PDActionSound actionSound = new PDActionSound(soundStream);
                    // set the ActionSound to the link.
                    link.setAction(actionSound);
                }
            }
        }
    }
}
How can I create the sound object (PDActionSound) and add it to the link successfully?
Speaking of mature, that part has never been used, and now that I had a closer look at the code, I think some work remains to be done... Please try this, I created this with PDFBox 2.0 after reading the PDF specification:
PDSimpleFileSpecification fileSpec = new PDSimpleFileSpecification(new COSString("/C/dir1/dir2/blah.mp3")); // see "File Specification Strings" in PDF spec
COSStream soundStream = new COSStream();
soundStream.createOutputStream().close();
soundStream.setItem(COSName.F, fileSpec);
soundStream.setInt(COSName.R, 44100); // put actual sample rate here
PDActionSound actionSound = new PDActionSound();
actionSound.getCOSObject().setItem(COSName.getPDFName("Sound"), soundStream);
link.setAction(actionSound); // reassign the new action to the link annotation
edit: as the above didn't work, here's an alternative solution, as requested in the comments. The file is embedded. It works only with .WAV files, and you have to know their details. About half a second is lost at the beginning. The sound you should hear is "I am Al Bundy". I tried with MP3 and didn't succeed. While googling, I found some texts saying that only "old" formats (WAV, AIF, etc.) are supported. I did find another way to play sounds ("Renditions") that even worked with an embedded MP3 in another product, but the generated structure in the PDF is even more complex.
COSStream soundStream = new COSStream();
OutputStream os = soundStream.createOutputStream(COSName.FLATE_DECODE);
URL url = new URL("http://cd.textfiles.com/hackchronii/WAV/ALBUNDY1.WAV");
InputStream is = url.openStream();
// FileInputStream is = new FileInputStream(".....WAV");
IOUtils.copy(is, os);
is.close();
os.close();
// See p. 506 in PDF spec, Table 294
soundStream.setInt(COSName.C, 1); // channels
soundStream.setInt(COSName.R, 22050); // sampling rate
//soundStream.setString(COSName.E, "Signed"); // The encoding format for the sample data
soundStream.setInt(COSName.B, 8); // The number of bits per sample value per channel. Default value: 8
// soundStream.setName(COSName.CO, "MP3"); // doesn't work
PDActionSound actionSound = new PDActionSound();
actionSound.getCOSObject().setItem(COSName.getPDFName("Sound"), soundStream);
link.setAction(actionSound);
Update 9.7.2016:
We discussed this on the PDFBox mailing list, and thanks to Gilad Denneboom we know two more things:
1) in Adobe Acrobat it only lets you select either WAV or AIF files
2) code by Gilad Denneboom with MP3SPI to convert MP3 to raw:
private static InputStream getAudioStream(String filename) throws Exception {
    File file = new File(filename);
    AudioInputStream in = AudioSystem.getAudioInputStream(file);
    AudioFormat baseFormat = in.getFormat();
    AudioFormat decodedFormat = new AudioFormat(
            AudioFormat.Encoding.PCM_UNSIGNED,
            baseFormat.getSampleRate(),
            baseFormat.getSampleSizeInBits(),
            baseFormat.getChannels(),
            baseFormat.getChannels(),
            baseFormat.getSampleRate(),
            false);
    return AudioSystem.getAudioInputStream(decodedFormat, in);
}
Revised/Summary:
I'm using a plugin to decode an MP3 audio file. I'd like to show a ProgressMonitor to give feedback to the user. The logic for constructing an AudioInputStream that decodes the MP3-format audio file is as follows:
void readAudioFile(File pAudioFile) throws UnsupportedAudioFileException, IOException {
    AudioInputStream nativeFormatStream = AudioSystem.getAudioInputStream(pAudioFile);
    AudioInputStream desiredFormatStream = AudioSystem.getAudioInputStream(AUDIO_OUTPUT_FORMAT, nativeFormatStream);
    int bufferLength = 4096;
    byte[] rawAudioBuffer = new byte[bufferLength];
    int bytesRead = desiredFormatStream.read(rawAudioBuffer, 0, bufferLength);
    ...
}
My first attempt was to wrap the audio File with a ProgressMonitorInputStream, then get the AudioInputStream from that:
void readAudioFile(File pAudioFile) throws UnsupportedAudioFileException, IOException {
    ProgressMonitorInputStream monitorStream = new ProgressMonitorInputStream(COMP, "Decoding", new FileInputStream(pAudioFile));
    AudioInputStream nativeFormatStream = AudioSystem.getAudioInputStream(monitorStream);
    AudioInputStream desiredFormatStream = AudioSystem.getAudioInputStream(AUDIO_OUTPUT_FORMAT, nativeFormatStream);
    int bufferLength = 4096;
    byte[] rawAudioBuffer = new byte[bufferLength];
    int bytesRead = desiredFormatStream.read(rawAudioBuffer, 0, bufferLength);
    ...
}
While it builds, upon execution I get the following when constructing the AudioInputStream from the ProgressMonitorInputStream:
java.io.IOException: mark/reset not supported
Comments below confirm that the AudioInputStream requires the InputStream it wraps to support the mark() and reset() methods, which apparently ProgressMonitorInputStream does not.
Another suggestion below is to wrap the ProgressMonitorInputStream with a BufferedInputStream (which does support mark/reset). So then I have:
void readAudioFile(File pAudioFile) throws UnsupportedAudioFileException, IOException {
    ProgressMonitorInputStream monitorStream = new ProgressMonitorInputStream(COMP, "Decoding", new FileInputStream(pAudioFile));
    AudioInputStream nativeFormatStream = AudioSystem.getAudioInputStream(new BufferedInputStream(monitorStream));
    AudioInputStream desiredFormatStream = AudioSystem.getAudioInputStream(AUDIO_OUTPUT_FORMAT, nativeFormatStream);
    int bufferLength = 4096;
    byte[] rawAudioBuffer = new byte[bufferLength];
    int bytesRead = desiredFormatStream.read(rawAudioBuffer, 0, bufferLength);
    ...
}
Now this builds and executes without error. However, the ProgressMonitor never appears, despite aggressive settings of setMillisToPopup(10) and setMillisToDecideToPopup(10). My theory is that the time to actually read the undecoded audio into memory is still faster than 10 ms; the time is actually spent decoding the raw audio after it is read from disk. So the next step is to wrap the undecoded AudioInputStream with the ProgressMonitorInputStream before constructing the decoding AudioInputStream:
void readAudioFile(File pAudioFile) throws UnsupportedAudioFileException, IOException {
    AudioInputStream nativeFormatStream = AudioSystem.getAudioInputStream(pAudioFile);
    AudioInputStream desiredFormatStream = AudioSystem.getAudioInputStream(AUDIO_OUTPUT_FORMAT,
            new BufferedInputStream(new ProgressMonitorInputStream(COMP, "Decoding", nativeFormatStream)));
    int bufferLength = 4096;
    byte[] rawAudioBuffer = new byte[bufferLength];
    int bytesRead = desiredFormatStream.read(rawAudioBuffer, 0, bufferLength);
    ...
}
I seem to be kicking the can down the road but not making progress. Is there any workaround for this problem? Is there an alternative way to provide a ProgressMonitor for the decoding process? My (unsatisfying) fallback is displaying a busy cursor. Any suggestions for other ways to accomplish the goal - providing visual feedback to the user, with at least an estimate of the time remaining to complete the decoding?
Here is my code, which concatenates four wav files and produces wavAppended.wav. The concatenated file plays nicely in Windows Media Player.
But when played through the PlaySound class, only one.wav can be heard.
Can anyone help?
class PlaySound extends Object implements LineListener
{
    File soundFile;
    JDialog playingDialog;
    Clip clip;

    public void PlaySnd(String s) throws Exception
    {
        JFileChooser chooser = new JFileChooser();
        soundFile = new File(s);
        Line.Info linfo = new Line.Info(Clip.class);
        Line line = AudioSystem.getLine(linfo);
        clip = (Clip) line;
        clip.addLineListener(this);
        AudioInputStream ais = AudioSystem.getAudioInputStream(soundFile);
        clip.open(ais);
        clip.start();
    }

    public void update(LineEvent le)
    {
        LineEvent.Type type = le.getType();
        playingDialog.setVisible(false);
        clip.stop();
        clip.close();
    }
}
public class Main
{
    public static void main(String[] args)
    {
        int i;
        String wavFile[] = new String[4];
        wavFile[0] = "D://one.wav";
        wavFile[1] = "D://two.wav";
        wavFile[2] = "D://three.wav";
        wavFile[3] = "D://space.au";
        AudioInputStream appendedFiles;

        try
        {
            AudioInputStream clip0 = AudioSystem.getAudioInputStream(new File(wavFile[0]));
            AudioInputStream clip1 = AudioSystem.getAudioInputStream(new File(wavFile[1]));
            AudioInputStream clip3;

            for (i = 0; i < 4; i++)
            {
                appendedFiles = new AudioInputStream(
                        new SequenceInputStream(clip0, clip1),
                        clip0.getFormat(),
                        clip0.getFrameLength() + clip1.getFrameLength());
                AudioSystem.write(appendedFiles, AudioFileFormat.Type.WAVE, new File("D:\\wavAppended.wav"));
                clip3 = AudioSystem.getAudioInputStream(new File("D:\\wavAppended.wav"));
                clip0 = clip3;
                clip1 = AudioSystem.getAudioInputStream(new File(wavFile[i + 2]));
            }

            PlaySound p = new PlaySound();
            p.PlaySnd("D://wavAppended.wav");
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
WAV files don't work that way -- you can't just throw multiple files together (same as you can't concatenate JPEG images, for instance), as there's a header on the data, and there are multiple different formats the data may be in. I'm surprised that the file loads at all.
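If you want to see the per-file header that raw concatenation ignores, you can dump each file's format before joining them. Something along these lines (the file names are just placeholders) makes any mismatch obvious:
// Every WAV carries its own header describing the encoding of its data chunk.
AudioFileFormat fmt1 = AudioSystem.getAudioFileFormat(new File("one.wav"));
AudioFileFormat fmt2 = AudioSystem.getAudioFileFormat(new File("two.wav"));
System.out.println(fmt1.getFormat());
System.out.println(fmt2.getFormat());
if (!fmt1.getFormat().matches(fmt2.getFormat())) {
    System.out.println("Formats differ -- convert before concatenating");
}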
To get you started with the WAV processing you may have a look at my small project. It can copy and paste WAV files together based on an time index file. The project should contain all the Java WAV processing you need (using javax.sound.sampled). The Butcher implementation and Composer contain the actual processing.
The idea is simple: take input audio files and create an index of the words contained in these files. An index entry is the word, its start time and its end time. When a new sentence is created it is stitched together from single words taken from the index.
The AudioInputStream is the main class for interacting with the Java Sound API. You read audio data from it. If you create audio data you do this by creating an AudioInputStream the AudioSystem can read from. The actual encoding is done by the AudioSystem implementation, depending on the output audio format.
The Butcher class is the one concerned with audio files. It can read and write audio files and create AudioInputStreams from an input byte array. The other interesting thing the Butcher can do is cut samples from an AudioInputStream. The AudioInputStream consists of frames that represent the samples of the PCM signal. Frames are multiple bytes long, so to cut a valid range of frames from the AudioInputStream one has to take the frame size into account: the start and end times in milliseconds have to be translated to the start byte of the start frame and the end byte of the end frame. (The start and end data are stored as timestamps to keep them independent of the underlying encoding of the file used.)
The Composer creates the output file. For a given sentence it takes the audio data for each word from the input files, concatenates the audio data and writes the result to disk.
In the end you'll need some understanding of the PCM and the WAV format. The Java sound API does not abstract that away.
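To make the frame arithmetic above concrete, here is a minimal sketch (the variable names startMillis/endMillis and the output file name are mine, not from the project) that turns a start/end time in milliseconds into a frame-aligned cut and writes it out as a new WAV file:
AudioFormat format = audioInputStream.getFormat();
int frameSize = format.getFrameSize();    // bytes per frame
float frameRate = format.getFrameRate();  // frames per second

long startFrame = (long) (startMillis / 1000.0 * frameRate);
long endFrame = (long) (endMillis / 1000.0 * frameRate);

// Skipping a multiple of the frame size keeps the cut aligned to whole sample frames.
// (In real code, loop on skip(), since it may skip fewer bytes than requested.)
audioInputStream.skip(startFrame * frameSize);

AudioInputStream cut = new AudioInputStream(audioInputStream, format, endFrame - startFrame);
AudioSystem.write(cut, AudioFileFormat.Type.WAVE, new File("word.wav"));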
In the above example you need to use a SequenceInputStream; then it will work fine. Please find my code below to join two files.
import java.io.File;
import java.io.IOException;
import java.io.SequenceInputStream;

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class JoinWav {
    public static void main(String... args) throws Exception {
        String wav_1 = "1497434542598100215.wav";
        String wav_2 = "104860397153760.wav";

        AudioInputStream stream_1 = AudioSystem.getAudioInputStream(new File(wav_1));
        AudioInputStream stream_2 = AudioSystem.getAudioInputStream(new File(wav_2));

        System.out.println("Info : Format [" + stream_1.getFormat() + "] Frame Length [" + stream_1.getFrameLength() + "]");

        AudioInputStream stream_join = new AudioInputStream(
                new SequenceInputStream(stream_1, stream_2),
                stream_1.getFormat(),
                stream_1.getFrameLength() + stream_2.getFrameLength());

        AudioSystem.write(stream_join, AudioFileFormat.Type.WAVE, new File("join.wav"));
    }
}
About a year ago I started to build an application for Android.
Now when I try to run it I get an exception about the AudioInputStream class. After a short search on Google I found out that Android doesn't support this class...
Is there any alternative for it?
This is the code that I wrote:
private void merge2WavFiles(String wavFile1, String wavFile2, String newWavFilePath) {
    try {
        File wave1 = new File(wavFile1);
        if (!wave1.exists())
            throw new Exception(wave1.getPath() + " - File Not Found");

        AudioInputStream clip1 = AudioSystem.getAudioInputStream(wave1);
        AudioInputStream clip2 = AudioSystem.getAudioInputStream(new File(wavFile2));
        AudioInputStream emptyClip =
                AudioSystem.getAudioInputStream(new File(emptyWavPath));

        AudioInputStream appendedFiles =
                new AudioInputStream(
                        new SequenceInputStream(clip1, emptyClip),
                        clip1.getFormat(),
                        clip1.getFrameLength() + 100
                );

        clip1 = appendedFiles;
        appendedFiles =
                new AudioInputStream(
                        new SequenceInputStream(clip1, clip2),
                        clip1.getFormat(),
                        clip1.getFrameLength() + clip2.getFrameLength()
                );

        AudioSystem.write(appendedFiles, AudioFileFormat.Type.WAVE, new File(newWavFilePath));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I also ran into problems similar to yours while developing a set of frequency-generating methods on Android, so I dug through the Android API references and the Java SE 7 docs quite a lot. But there are no easily exchangeable alternatives to the AudioInputStream class, or even to the AudioSystem class that also appears in your code.
If you want to reuse your legacy code you will probably have to revise and refactor several pieces. In my case I used InputStream and ByteArrayInputStream (java.io) for that, while recording and related system actions are managed by AudioRecord and AudioManager (android.media). Note that AudioFormat in Android and in Java 7 differ in their internal characteristics. Once I have sorted out my own audio manipulation issues I will attach a more complete piece of sample code for you.
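In the meantime, here is a rough sketch of the kind of plain java.io replacement I mean: it concatenates two raw 16-bit little-endian PCM buffers (same sample rate and channel count assumed) and writes a minimal 44-byte RIFF/WAV header in front of them. The class and method names are made up for the example, so treat it as a starting point rather than a tested drop-in.
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class WavWriter {

    // pcm1/pcm2: raw 16-bit little-endian PCM samples with identical sample rate and channel count
    public static void writeConcatenated(byte[] pcm1, byte[] pcm2, int sampleRate,
                                         int channels, String outPath) throws IOException {
        ByteArrayOutputStream data = new ByteArrayOutputStream();
        data.write(pcm1);
        data.write(pcm2);
        byte[] pcm = data.toByteArray();

        int bitsPerSample = 16;
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        int blockAlign = channels * bitsPerSample / 8;

        // Minimal canonical 44-byte WAV header (RIFF chunk + fmt chunk + data chunk)
        ByteBuffer header = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        header.put("RIFF".getBytes(StandardCharsets.US_ASCII));
        header.putInt(36 + pcm.length);                  // remaining chunk size
        header.put("WAVE".getBytes(StandardCharsets.US_ASCII));
        header.put("fmt ".getBytes(StandardCharsets.US_ASCII));
        header.putInt(16);                               // fmt chunk size
        header.putShort((short) 1);                      // audio format 1 = PCM
        header.putShort((short) channels);
        header.putInt(sampleRate);
        header.putInt(byteRate);
        header.putShort((short) blockAlign);
        header.putShort((short) bitsPerSample);
        header.put("data".getBytes(StandardCharsets.US_ASCII));
        header.putInt(pcm.length);

        try (FileOutputStream out = new FileOutputStream(outPath)) {
            out.write(header.array());
            out.write(pcm);
        }
    }
}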