I wrote code that plays a .wav file, and it works fine.
Now another piece of code gets music data from an audio receiver and keeps appending it to that .wav file.
Suppose the audio is 5 seconds long when I run the player. Even though the updater code keeps updating the wav file, the player only plays those initial 5 seconds.
The playing code is simple:
try {
    // The stream's length is fixed when the header is parsed here,
    // so the Clip only ever sees the data present at open time.
    AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(new File("junk.wav"));
    Clip clip = AudioSystem.getClip();
    clip.open(audioInputStream); // loads the audio data into the clip
    clip.start();
} catch (Exception ex) {
    System.out.println("Error with playing sound.");
    ex.printStackTrace();
}
How can I play the audio stream just after the input enters the microphone jack (some lag is permitted)?
You didn't post the question, so the answer will be general.
The wav file format is strictly defined. It contains a header and data (sound samples). The header defines the amount of data in the wav file and provides additional information needed for playback, such as the sample rate. When you open the wav file with an AudioInputStream, it parses that information. Because the data length is fixed in the header, you can't simply append data to the wav file. You could modify the wav file's data samples in place, but you must be sure the replacement data has the same format.
When you open the documentation for the AudioInputStream class, the first statement is:
"An audio input stream is an input stream with a specified audio format and length."
From the OS perspective:
Using a file as a buffer in a real-time player may be a problem. The filesystem is buffered/cached at many levels to provide fast access to big chunks of memory. Reading and writing a file on the fly may even cause file corruption. If I understand correctly, you would like to make a circular buffer inside the WAV file (overwriting the same samples again and again). You will also run into the problem of synchronizing the new content of the file (provided by the writer) with the Clip that plays it in a loop.
What can you do?
You could use a SourceDataLine / TargetDataLine. Read the samples on the fly and keep them in a byte buffer (e.g. byte[] or ByteBuffer) instead of a file. First fill the buffer with incoming data, then read/write in a loop from/to the xxxDataLine. Be aware that lines are opened for a specific AudioFormat; use the same format for input and output. Not all formats are supported (it depends on the hardware, so this is delicate territory in Java). Also be aware that buffer sizes, even when given in bytes, must sometimes be adjusted to the frame size (16 bits per sample = 2 bytes), as in the sketch below.
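Here is a minimal sketch of that capture/playback loop, assuming the hardware supports 16-bit mono PCM; the format and buffer size are placeholder choices:

import javax.sound.sampled.*;

public class LivePassthrough {
    public static void main(String[] args) throws LineUnavailableException {
        // Same format for capture and playback; 16-bit frames = 2 bytes each.
        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);

        TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);
        SourceDataLine out = AudioSystem.getSourceDataLine(fmt);
        mic.open(fmt);
        out.open(fmt);
        mic.start();
        out.start();

        byte[] buf = new byte[4096]; // a multiple of the 2-byte frame size
        while (true) {
            int n = mic.read(buf, 0, buf.length); // blocks until data is captured
            out.write(buf, 0, n);                 // blocks until the line has room
        }
    }
}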
See "Capturing Audio" in the Java Tutorial's Sound Trail
To understand this section, you will have to read the preceding sections, too. It is not an easy read. But basically, the TargetDataLine, as mentioned by flyer (+1) is key.
I suspect that if you append to a .wav while reading from it, you will get a concurrency error.
If you just want to input sound from a mic, and you correctly set up the TargetDataLine, you should be able to get pretty low latency.
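For example, a smaller internal buffer on the line keeps latency down; the numbers here are only illustrative, and whether the format is supported depends on the hardware:

// Open the capture line with a deliberately small internal buffer.
AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);
mic.open(fmt, 4096); // buffer size in bytes; smaller = lower latency, more underrun risk
mic.start();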
My goal is to play multiple mp3, ogg, or wav music files at the same time in order to create adaptive music in a video game.
Instead of using the Clip class from the Java Sound API, I managed to create a class named Track which, thanks to mp3spi and vorbisspi, can read mp3, ogg, and wav files by writing the audio data to a SourceDataLine.
I found this solution thanks to this post:
https://stackoverflow.com/a/17737483/13326269
Everything works fine with a thread inside the stream function, so I can create many Track objects to play multiple sounds at the same time. But I want to create a function like void setMicrosecondPosition(long microseconds); from the Clip class.
I tried many things and found a way to do it:
in = getAudioInputStream(new File(file).getAbsoluteFile());
int seconds = 410;
int bitrate = 320; // in kbit/s
in.skip(seconds * bitrate * 1000 / 8); // divided by 8 because the skip method uses bytes
But I need the bitrate of the file. So how can I get the bitrate of any sound file? Or better, how can I use mp3 and ogg files with the javax.sound.sampled.Clip class?
I believe that if you really do have working AudioInputStreams, the skip function references a number of bytes, and that should be a fixed amount per frame, determined by the audio format. For example, stereo 16-bit is 4 bytes per frame, regardless of content. So you should be able to use the frame rate (e.g., 44100 frames per second) to skip to the desired start point.
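A sketch of that arithmetic, assuming the stream has already been decoded to PCM so the frame size is known (the file name is a placeholder):

AudioInputStream in = AudioSystem.getAudioInputStream(new File("track.wav"));
AudioFormat fmt = in.getFormat();
long bytesPerSecond = (long) (fmt.getFrameRate() * fmt.getFrameSize());
long toSkip = 410 * bytesPerSecond; // target position in seconds * bytes per second
while (toSkip > 0) {
    long skipped = in.skip(toSkip); // skip() may skip fewer bytes than requested
    if (skipped <= 0) break;
    toSkip -= skipped;
}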
Another dodge would be to read but ignore the incoming bytes from the AudioInputStream, in a method similar to the stream method in your linked solution's example. This can get you to the desired point pretty quickly, since you won't be blocked by the SourceDataLine write(). But if that works, the skip method should also work and would be preferable.
I am working on programmatically removing/redacting parts of an .mp3 audio file. An example scenario is to mute/remove the audio from 1:00 - 2:00 in a 3 minute (0:00 - 3:00) audio file. Do we have any libraries that can be useful for this scenario?
I know how to achieve this for .wav audio files using the Java Sound API (package: javax.sound), but it looks like this API doesn't support .mp3 files.
This is how I am thinking to achieve it technically if I were to work with .wav:
1. The audio is composed of frames, and each frame represents a time slot. Use the AudioInputStream read() method to convert the audio file to raw audio data (byte[]).
2. Find the frame that represents the start time slot (using audioInputStream.getFrameLength() and the format's getFrameRate()).
3. Find the frame that represents the end time slot (likewise).
4. Remove the frames between the start and end time slots in the array (sketched after the references below).
5. Convert the byte array back to an AudioInputStream - AudioSystem.getAudioInputStream(ByteArrayInputStream).
References:
AudioInputStream
AudioSystem
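A minimal sketch of those steps, assuming 16-bit signed PCM and placeholder file names. Two deliberate swaps: it mutes (zeroes) the range rather than removing it, which keeps the total duration intact, and it wraps the raw bytes with the AudioInputStream constructor, since raw data has no header for AudioSystem.getAudioInputStream to parse:

import java.io.ByteArrayInputStream;
import java.io.File;
import java.util.Arrays;
import javax.sound.sampled.*;

public class MuteRange {
    public static void main(String[] args) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("input.wav"));
        AudioFormat fmt = in.getFormat();
        byte[] data = in.readAllBytes();                           // step 1: raw audio data

        int frameSize = fmt.getFrameSize();
        int start = (int) (60 * fmt.getFrameRate()) * frameSize;   // step 2: 1:00
        int end   = (int) (120 * fmt.getFrameRate()) * frameSize;  // step 3: 2:00

        // step 4: zero the frames (silence, for signed PCM) so timing stays intact
        Arrays.fill(data, start, end, (byte) 0);

        // step 5: wrap the edited bytes and write a new wav file
        AudioInputStream out = new AudioInputStream(
                new ByteArrayInputStream(data), fmt, data.length / frameSize);
        AudioSystem.write(out, AudioFileFormat.Type.WAVE, new File("redacted.wav"));
    }
}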
I think you are on the right track, as far as .wav files are concerned.
The conversion of frame to time depends on the sample rate. For example, at 44100 fps, time 0.5 seconds = frame location 22050. The raw data will probably have multiple bytes per frame: 16-bit encoding is of course 2 bytes, and stereo 16-bit is 4 bytes per frame.
Often it becomes worthwhile to convert the byte data to and from signed PCM floats (ranging from -1 to 1) before working on it.
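For example, a conversion for 16-bit little-endian samples might look like this (a sketch, assuming that byte order):

// Reassemble two little-endian bytes into a signed 16-bit sample, scaled to [-1, 1].
static float toFloat(byte lo, byte hi) {
    int s = (hi << 8) | (lo & 0xFF); // the high byte carries the sign
    return s / 32768f;
}

// Clamp, scale back to 16 bits, and split into little-endian bytes.
static void toBytes(float f, byte[] out, int pos) {
    int s = (int) (Math.max(-1f, Math.min(1f, f)) * 32767f);
    out[pos] = (byte) s;
    out[pos + 1] = (byte) (s >> 8);
}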
For working with mp3 files, one has to first decompress the file, do the edits, then recompress back to mp3. A library for mp3 encoding/decoding can be found on GitHub at the following location: pdudits/soundlibs. A lot will depend on the format of the sound file prior to its being compressed.
I don't know the specifics of getting to PCM frames from these tools for .mp3 files. I recall having to tinker with the code in JOrbis to intercept the data prior to its being sent to playback output. I wouldn't be surprised if you have to go through something similar to get JLayer working for you.
I have an Android application which records WAV files. These files can be up to 3 minutes long, but I need to split them into 30-second chunks (the smaller WAVs must be playable too). I don't really care whether the split happens at a silent moment or not. Is there any way to do it?
You can use AudioInputStream and its AudioFileFormat member (which contains an AudioFormat instance) to know what to write (format, sample rate), and you can use AudioSystem to write it.
Based on the sample rate of the format you can find out how many bytes of audio make up 30 seconds, then loop: read that many bytes from the AudioInputStream and write them to a new file.
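A sketch of that loop, assuming javax.sound.sampled is available where the processing runs; the file names are placeholders:

import java.io.File;
import javax.sound.sampled.*;

public class SplitWav {
    public static void main(String[] args) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("recording.wav"));
        AudioFormat fmt = in.getFormat();

        long framesPerChunk = (long) (30 * fmt.getFrameRate()); // 30 seconds of frames
        long remaining = in.getFrameLength();
        int part = 0;
        while (remaining > 0) {
            long frames = Math.min(framesPerChunk, remaining);
            // Wrapping the same stream with a frame-length cap yields one playable chunk.
            AudioInputStream chunk = new AudioInputStream(in, fmt, frames);
            AudioSystem.write(chunk, AudioFileFormat.Type.WAVE, new File("part" + part++ + ".wav"));
            remaining -= frames;
        }
    }
}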
I'm using an AudioInputStream to feed bytes to a SourceDataLine to play a PCM file. I want to give the user the ability to move a slider to jump to some point in the file.
Issues I'm having:
markSupported() returns false on my AudioInputStream, so I cannot use my initial approach of calling reset() then skip() (which I already thought was kind of ugly...).
I would really prefer not to tear down the InputStream and create a new one just to jump to a position prior to my current mark.
SourceDataLine.getLongFramePosition() does not seem to be very reliable... I know it has a buffer, but even if I account for the bytes left in the buffer, I do not understand the behavior.
I have considered using a memory-mapped file to feed bytes to the line so that I can jump wherever I want, but I don't want to add complexity to the function if I don't have to. Is there a good way to do this that I'm missing? Also, can anyone explain what the frame number returned by getLongFramePosition() actually means? Is it the number of frames that have passed through the speakers? (It does not appear to be.)
Did the BigClip work for you?
If not, here's something that could work. Warning: it is a little convoluted.
http://hexara.com/VSL/VSL2.htm
With this Applet, you can load a wav -- I've loaded wavs longer than 5 minutes and it has worked fine, but audio data does take up a lot of RAM.
Once the WAV is loaded, you can mouse to any point and play back by holding the mouse down and dragging. Obviously that's not exactly what YOU want to do, since you want to play back from a point without worrying about dragging or dragging speed. But the path I take to get the data into a playable state should still work for you.
If you have a working AudioInputStream and AudioFileFormat, you can set up an internal array and read the data directly into it. The audio file format gives you the encoding and the length in frames, which you can use to calculate your array dimension.
Then make something like a TargetDataLine that gets its audio data from the array you made. Such a reader has to have a variable that marks where the next read will start. You can use your JSlider to change the contents of that variable to point to whatever frame you want.
If you don't have a working AudioInputStream, there are ways to get one... but it is more convoluted, involving ByteArrayOutputStream & ByteArrayInputStream objects. Probably no need to go there. (I use them to read client data in the above Applet.)
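A minimal sketch of that cursor idea (not the Applet's actual code; names are placeholders):

import javax.sound.sampled.*;

class ArrayPlayer {
    // Next byte to play; a JSlider callback may move it at any time.
    private volatile int cursor;

    void seekToFrame(long frame, AudioFormat fmt) {
        cursor = (int) (frame * fmt.getFrameSize()); // keep it frame-aligned
    }

    void stream(byte[] audioData, SourceDataLine line) {
        byte[] buf = new byte[4096];
        while (cursor < audioData.length) {
            int n = Math.min(buf.length, audioData.length - cursor);
            System.arraycopy(audioData, cursor, buf, 0, n);
            cursor += n;
            line.write(buf, 0, n); // blocks until the line accepts the data
        }
    }
}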
As to your question about the current frame location, the key problem is that the JVM processes audio data in chunks, and it tends to run ahead of what is heard, in bursts. I banged my head against this problem for a long while, then came up with this:
http://www.java-gaming.org/index.php/topic,24605.0.html
The cleanest way I could come up with was to use a FileChannel to read the bytes into the SourceDataLine. When the user moves a slider, I determine the byte position in the file (making sure to adjust or round it so it lines up with a frame boundary), set that position on the FileChannel, and then continue playback. I flush the line when this happens to keep it from playing the remaining buffered bytes.
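Roughly like this (a sketch; the path, buffer size, and frame size are placeholders):

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import javax.sound.sampled.SourceDataLine;

class ChannelPlayer {
    // Called from the slider listener: realign to a frame and reposition the channel.
    void seek(FileChannel channel, SourceDataLine line, long targetByte, int frameSize) throws Exception {
        line.flush();                                              // drop bytes already queued
        channel.position(targetByte - (targetByte % frameSize));   // round down to a frame boundary
    }

    // Playback loop: read from the channel and feed the line.
    void play(FileChannel channel, SourceDataLine line) throws Exception {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (channel.read(buf) > 0) {
            line.write(buf.array(), 0, buf.position());
            buf.clear();
        }
    }
}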
I know how to read bytes from a file and save them, but how can I read seconds from an audio file and save them in a new file? Is there any method for this? How many bytes make up 1 second? Maybe it sounds stupid, but I have no idea.
P.S. I want to record only 20 seconds from an audio file and save those 20 seconds to another file. I know how to write to a new file, but not how to write only some part (20 seconds) of an audio file.
Thanks in advance,
Roni
It depends on the format you are using for capturing the audio stream. For example, if you are using raw (uncompressed) audio, 16 bits per sample, stereo, at 44.1 kHz (which means 44100 samples per second), then you need to store 2 * (16/8) * 44100 bytes per second. Additionally, if you want to write a standard file that other applications can read, you will need to decide on a container format. For raw (uncompressed) audio, Microsoft wave files are commonly used, and they require you to write a header with some metadata at the beginning of the file.
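That arithmetic as code (the numbers are just the example format above):

int channels = 2;            // stereo
int bytesPerSample = 16 / 8; // 16-bit samples = 2 bytes
int sampleRate = 44100;      // samples per second

int bytesPerSecond = channels * bytesPerSample * sampleRate; // 176,400 bytes
int twentySeconds = 20 * bytesPerSecond;                     // bytes to copy for a 20-second clip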
Update:
You can try using AudioFileWriter.