Cut audio data in the stream with Java

I know how to read bytes from a file and save them, but how can I read seconds from an audio file and save them to a new file? Is there a method for this? How many bytes does 1 second contain? Maybe it sounds stupid, but I have no idea.
P.S. I want to take only 20 seconds from an audio file and save those 20 seconds to another file. I know how to write to a new file, but how do I write only some part (20 seconds) of an audio file?
Thanks in advance,
Roni

It depends on the format you are using for capturing the audio stream. For example, if you are using raw (uncompressed) audio at 16 bits per sample, stereo, and 44.1 kHz (which means 44100 samples per second), then you need to store 2 * (16/8) * 44100 bytes per second. Additionally, if you want to write a standard file that other applications can read, you will need to decide on a container format. For raw (uncompressed) audio, Microsoft WAVE files are commonly used, and that requires you to write a header with some metadata at the beginning of your file.
Update:
You can try using AudioFileWriter.
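A sketch of both points above: the bytes-per-second arithmetic, and letting AudioSystem.write produce the WAVE header instead of writing it by hand. The class and file names here are my own invention for illustration:

```java
import javax.sound.sampled.*;
import java.io.File;

public class ClipExtractor {

    // Raw PCM: bytes per second = sampleRate * channels * (bitsPerSample / 8)
    public static long bytesPerSecond(float sampleRate, int channels, int bitsPerSample) {
        return (long) sampleRate * channels * (bitsPerSample / 8);
    }

    // Copies the first `seconds` of a WAV file into a new WAV file.
    // AudioSystem.write emits the RIFF/WAVE header, so no manual header work is needed.
    public static void extractFirstSeconds(File source, File target, int seconds) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(source);
        AudioFormat fmt = in.getFormat();
        long frames = (long) (fmt.getFrameRate() * seconds);
        // Wrapping the stream with a frame-length limit truncates it after `seconds`.
        AudioInputStream clipped = new AudioInputStream(in, fmt, frames);
        AudioSystem.write(clipped, AudioFileFormat.Type.WAVE, target);
        in.close();
    }

    public static void main(String[] args) throws Exception {
        // 16-bit stereo at 44.1 kHz: 2 * (16/8) * 44100 = 176400 bytes per second
        System.out.println(bytesPerSecond(44100, 2, 16));
        // Hypothetical file names:
        // extractFirstSeconds(new File("input.wav"), new File("first20.wav"), 20);
    }
}
```

So for the original question, 20 seconds of CD-quality audio is 20 * 176400 = 3,528,000 bytes of sample data, plus the header.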

Related

Java using mp3, ogg and wav files with javax.sound.sampled.Clip (or getting the bitrate of the sound file)

My goal is to play multiple mp3, ogg, or wav music files at the same time in order to create adaptive music in a video game.
Instead of using the Clip class from the Java Sound API, I managed to create a class named Track which, thanks to mp3spi and vorbisspi, can read mp3, ogg, and wav files by writing the audio data to a SourceDataLine.
I found this solution thanks to this post:
https://stackoverflow.com/a/17737483/13326269
Everything works fine with a thread inside the stream function, so I can create many Track objects to play multiple sounds at the same time. But I want to create a function like void setMicrosecondPosition(long microseconds); from the Clip class.
I tried many things and found a way to do it:
in = getAudioInputStream(new File(file).getAbsoluteFile());
int seconds = 410;
int bitrate = 320; // in kbit/s
in.skip(seconds * bitrate * 1000 / 8); // divided by 8 because the skip method uses bytes
But I need the bitrate of the file. So, how can I get the bitrate of any sound file? Or better, how can I use mp3 and ogg files with the javax.sound.sampled.Clip class.
I believe that if you really do have working AudioInputStreams, the skip function references a number of bytes, which is a fixed amount per frame determined by the audio format. For example, stereo 16-bit is 4 bytes per frame, regardless of how the source file was encoded. So you should be able to use the frame rate (e.g., 44100 frames/sec) to skip to the desired start point.
Another dodge would be to read, but ignore, the bytes incoming from the AudioInputStream, in a method similar to the stream method in the linked solution's example. This can get you to the desired point fairly quickly, since you won't be blocked by the SourceDataLine write(). But if that works, the skip method should also work and would be preferable.
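A sketch of the first suggestion: derive the skip count from the decoded stream's AudioFormat (frame rate and frame size) rather than from the encoded bitrate. The class and method names are mine:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import java.io.IOException;

public class SkipByTime {

    // Bytes to skip in a PCM AudioInputStream: elapsed frames * bytes per frame.
    public static long bytesForSeconds(AudioFormat fmt, double seconds) {
        long frames = (long) (fmt.getFrameRate() * seconds);
        return frames * fmt.getFrameSize();
    }

    // Positions the stream at `seconds` from its current point.
    public static void skipSeconds(AudioInputStream in, double seconds) throws IOException {
        long toSkip = bytesForSeconds(in.getFormat(), seconds);
        while (toSkip > 0) {             // skip() may skip fewer bytes than requested
            long skipped = in.skip(toSkip);
            if (skipped <= 0) break;     // end of stream reached
            toSkip -= skipped;
        }
    }
}
```

Note that this assumes the stream has already been decoded to PCM (e.g., via AudioSystem.getAudioInputStream(targetFormat, sourceStream) with the SPI decoders installed); for the raw mp3 byte stream the frame size is not fixed.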

Is there a Java library to programmatically remove parts of an .mp3 audio file?

I am working on programmatically removing/redacting parts of an .mp3 audio file. An example scenario is to mute/remove the audio from 1:00 - 2:00 in a 3-minute (0:00 - 3:00) audio file. Are there any libraries that would be useful for this scenario?
I know how to achieve this for .wav audio files using the Java Sound API (package: javax.sound), but it looks like this API doesn't support .mp3 files.
This is how I would approach it technically if I were working with .wav:
The audio is composed of frames. Each frame represents a time slot. Use the AudioInputStream read() method to convert the audio file to raw audio data (byte[])
Find the frame which represents the start time slot (using audioInputStream.getFrameLength() and audioInputStream.getFormat().getFrameRate())
Find the frame which represents the end time slot (using the same APIs)
Remove the frames between the start and end time slots in the array
Convert the byte array to the AudioInputStream - AudioSystem.getAudioInputStream(ByteArrayInputStream)
References-
AudioInputStream
AudioSystem
I think you are on the right track, as far as .wav files are concerned.
The conversion of frame to time depends on the sample rate. For example, at 44100 fps, time 0.5 seconds corresponds to frame 22050. The raw data will probably have multiple bytes per frame: 16-bit encoding is 2 bytes per sample, and stereo 16-bit is 4 bytes per frame.
Often it becomes worthwhile to convert the byte data to and from signed PCM floats (ranging -1 to 1) before working on it.
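For the .wav case, the frame arithmetic in the steps above might look like this (class and method names are mine). Cutting on frame boundaries keeps stereo samples paired:

```java
import javax.sound.sampled.AudioFormat;

public class WavRedactor {

    // Removes all frames between startSec (inclusive) and endSec (exclusive)
    // from raw PCM data. Working in whole frames keeps channels aligned.
    public static byte[] removeRange(byte[] pcm, AudioFormat fmt, double startSec, double endSec) {
        int frameSize = fmt.getFrameSize();
        int startByte = (int) (startSec * fmt.getFrameRate()) * frameSize;
        int endByte   = (int) (endSec * fmt.getFrameRate()) * frameSize;
        startByte = Math.min(startByte, pcm.length);
        endByte   = Math.min(Math.max(endByte, startByte), pcm.length);

        byte[] result = new byte[pcm.length - (endByte - startByte)];
        System.arraycopy(pcm, 0, result, 0, startByte);                       // keep head
        System.arraycopy(pcm, endByte, result, startByte, pcm.length - endByte); // keep tail
        return result;
    }
}
```

The resulting byte[] can then be wrapped in a ByteArrayInputStream, turned back into an AudioInputStream with the same AudioFormat, and written out with AudioSystem.write, as in step 5.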
For working with mp3 files, one has to first decompress the file, do the edits, then recompress back to mp3. A library for mp3 encoding/decoding can be found on GitHub at pdudits/soundlibs. A lot will depend on the format of the sound file prior to its being compressed.
I don't know about the specifics of getting to PCM frames from these tools for .mp3 files. I recall having to tinker with the code in JORBIS to intercept the data prior to its being sent to playback output. I wouldn't be surprised if you would have to go through something similar to get JLayer working for you.

How can I split WAV into smaller WAV files?

I have an Android application which records WAV files. These files can be up to 3 minutes long, but I need to split them into 30-second pieces (the smaller WAVs must be playable too). I don't really care whether it is split at a silent moment or not. Is there any way to do it?
You can use AudioInputStream and its AudioFileFormat member (which contains an AudioFormat instance) to know what to write (format, sample rate), and you can use AudioSystem to write it.
Based on the sample rate of the format you can work out how many bytes of audio make up 30 seconds, then loop: read that many bytes from the AudioInputStream and write them to a new file.
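A sketch of that loop, assuming plain PCM input; the helper names are mine. Each chunk can then be written out with AudioSystem.write, which produces a playable WAV with its own header:

```java
import javax.sound.sampled.*;
import java.io.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WavSplitter {

    // Reads the stream in chunks of `chunkSeconds` worth of frames.
    public static List<byte[]> split(AudioInputStream in, int chunkSeconds) throws IOException {
        AudioFormat fmt = in.getFormat();
        int chunkBytes = (int) (fmt.getFrameRate() * chunkSeconds) * fmt.getFrameSize();
        List<byte[]> chunks = new ArrayList<>();
        byte[] buf = new byte[chunkBytes];
        int filled;
        while ((filled = readFully(in, buf)) > 0) {
            chunks.add(Arrays.copyOf(buf, filled)); // last chunk may be shorter
        }
        return chunks;
    }

    // read() may return short counts, so keep reading until the buffer is full or EOF.
    private static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n <= 0) break;
            total += n;
        }
        return total;
    }

    // Writes one chunk back out as a standalone, playable WAV file.
    public static void writeChunk(byte[] chunk, AudioFormat fmt, File target) throws IOException {
        AudioInputStream out = new AudioInputStream(
                new ByteArrayInputStream(chunk), fmt, chunk.length / fmt.getFrameSize());
        AudioSystem.write(out, AudioFileFormat.Type.WAVE, target);
    }
}
```

Because chunkBytes is computed as whole frames, every split point lands on a frame boundary, so each piece stays valid audio.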

compressing raw data returned by Android AudioRecord.read(byte[], int, int)

I am trying to record audio from the mic, and I can successfully read the data returned by the AudioRecord class and save it to a file (WAV format).
Now the real problem is that the file I am creating is too big: audio with a duration of 5 minutes takes up to 25 MB.
Can anyone suggest how to reduce the size? I am open to other file formats as well.
Thanks in advance.
WAV is expensive. Try MP3. See the code - https://github.com/yhirano/Mp3VoiceRecorderSampleForAndroid
Mp3 Encoder: http://www.tritonus.org/plugins.html
Try using Android's MediaMuxer. It is only supported on API 18+
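For scale, here is the raw-PCM arithmetic behind that 25 MB figure, assuming mono 16-bit capture at 44.1 kHz (the exact recording parameters are not stated in the question):

```java
public class PcmSize {

    // Uncompressed PCM size = sampleRate * channels * (bitsPerSample / 8) * seconds
    public static long wavDataBytes(int sampleRate, int channels, int bitsPerSample, int seconds) {
        return (long) sampleRate * channels * (bitsPerSample / 8) * seconds;
    }

    public static void main(String[] args) {
        // 5 minutes of mono 16-bit 44.1 kHz: 26,460,000 bytes, roughly 25 MB,
        // which matches the file size in the question. A 128 kbit/s MP3/AAC
        // stream needs about 4.8 MB for the same 5 minutes.
        System.out.println(wavDataBytes(44100, 1, 16, 300));
    }
}
```

So the size is inherent to uncompressed WAV, not a bug in the recording code; switching to a compressed format is the only real fix.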

Real time wav playing

I wrote a code that plays a .wav file. It works fine.
Now another piece of code gets music data from an audio receiver and keeps appending it to that .wav file.
Suppose the audio is 5 seconds long when I run the player; despite the updater code updating the wav file, the player just plays only those initial 5 seconds.
The playing code is simple :
try {
    AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(new File("junk.wav"));
    Clip clip = AudioSystem.getClip();
    clip.open(audioInputStream);
    clip.start();
} catch (Exception ex) {
    System.out.println("Error with playing sound.");
    ex.printStackTrace();
}
How can I play audio stream just after the input enters microphone jack (some lag permitted)?
You didn't post a specific question, so the answer will be general.
The wav file has a strictly defined format. It contains a header and data (sound samples). The header defines the amount of data in the wav file and also provides information needed for playback, such as the sample rate. If you open the wav file with AudioInputStream, it parses that information. Because the data length is fixed in the header, you can't simply append data to a wav file. You could modify the wav file's data samples, but you must be sure the replacement data has the same format.
When you open the docu for Class AudioInputStream the first statement is:
"An audio input stream is an input stream with a specified audio format and length."
From OS perspective.
Using a file as a buffer in a real-time player may be a problem. The filesystem is buffered/cached on many levels to provide fast access to big chunks of memory. Reading and writing a file on the fly may even corrupt the file. If I understand correctly, you would like to make a circular buffer in the WAV file (overwrite the same samples again and again). You will find additional problems synchronizing the new content of the file (provided by the writer) with the Clip that plays it in a loop.
What can you do?
You could use a SourceDataLine / TargetDataLine. Read the samples on the fly and keep them in a byte buffer (e.g., byte[] or ByteBuffer) instead of a file. First fill the buffer with incoming data, then read/write in a loop from/to the xxxDataLine. Be aware that lines are opened for a specific AudioFormat; use the same format for input and output. Not all formats are supported (it depends on the hardware, so this is a delicate area in Java). Also be aware that data sizes, even when given in bytes, must sometimes be adjusted to the frame size (16 bits per sample = 2 bytes).
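A minimal sketch of that capture-and-play loop. The format and buffer size here are assumptions (real code should check what the hardware supports), and the frame-alignment helper implements the byte-size adjustment mentioned above:

```java
import javax.sound.sampled.*;

public class MicMonitor {

    // Rounds a buffer size down to a whole number of frames
    // (e.g., 16-bit stereo has 4-byte frames, so 1025 -> 1024).
    public static int alignToFrame(int bytes, int frameSize) {
        return bytes - (bytes % frameSize);
    }

    public static void main(String[] args) throws Exception {
        // Assumed format; the same format is used for capture and playback.
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);

        TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);      // capture side
        SourceDataLine speakers = AudioSystem.getSourceDataLine(fmt); // playback side
        mic.open(fmt);
        speakers.open(fmt);
        mic.start();
        speakers.start();

        byte[] buf = new byte[alignToFrame(4096, fmt.getFrameSize())];
        while (true) {                       // loop until the process is stopped
            int n = mic.read(buf, 0, buf.length);
            if (n > 0) {
                speakers.write(buf, 0, n);   // blocks, which paces the loop
            }
        }
    }
}
```

A small buffer keeps latency low; making it too small risks underruns, so some tuning per machine is usually needed.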
See "Capturing Audio" in the Java Tutorial's Sound Trail
To understand this section, you will have to read the preceding sections, too. It is not an easy read. But basically, the TargetDataLine, as mentioned by flyer (+1) is key.
I suspect if you append to a .wav while reading from it, you will get a concurrency error.
If you just want to input sound from a mike, and you correctly set up TargetDataLine, you should be able to get pretty low latency.
