I'm writing an application that records the screen and audio. While the screen recording works perfectly, I'm having difficulty getting the raw audio using the JDK libraries. Here's the code:
try {
    // Now, we're going to loop
    long startTime = System.nanoTime();
    System.out.println("Encoding Image.....");
    while (!Thread.currentThread().isInterrupted()) {
        // take the screen shot
        BufferedImage screen = robot.createScreenCapture(screenBounds);
        // convert to the right image type
        BufferedImage bgrScreen = convertToType(screen, BufferedImage.TYPE_3BYTE_BGR);
        // encode the image
        writer.encodeVideo(0, bgrScreen, System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
        /* Need to get audio here and then encode using Xuggler. Something like:
        WaveData wd = new WaveData();
        TargetDataLine line;
        AudioInputStream aus = new AudioInputStream(line);
        short[] samples = getSourceSamples();
        writer.encodeAudio(0, samples); */
        if (timeCreation < 10) {
            timeCreation = getGMTTime();
        }
        // sleep for framerate milliseconds
        try {
            Thread.sleep((long) (1000 / FRAME_RATE.getDouble()));
        } catch (Exception ex) {
            System.err.println("stopping....");
            break;
        }
    }
    // Finally we tell the writer to close and write the trailer if needed
} finally {
    writer.close();
}
This page has some pseudo-code like:
while (haveMoreAudio()) {
    short[] samples = getSourceSamples();
    writer.encodeAudio(0, samples);
}
but what exactly should I do for getSourceSamples()?
Also, a bonus question - is it possible to choose from multiple microphones in this approach?
See also:
Xuggler encoding and muxing
Try this:
// Pick a format. We need 16 bits; the rest can be set to anything reasonable.
// It is better to enumerate the formats that the system supports, because
// getLine() can error out with any particular format.
AudioFormat audioFormat = new AudioFormat(44100.0F, 16, 2, true, false);
// Get the default TargetDataLine with that format
DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
TargetDataLine line = (TargetDataLine) AudioSystem.getLine(dataLineInfo);
// Open and start capturing audio
line.open(audioFormat, line.getBufferSize());
line.start();
while (true) {
    // read as raw bytes
    byte[] audioBytes = new byte[line.getBufferSize() / 2]; // best size?
    int numBytesRead = line.read(audioBytes, 0, audioBytes.length);
    // convert to signed shorts representing samples;
    // mask the low byte with 0xFF so it is not sign-extended
    int numSamplesRead = numBytesRead / 2;
    short[] audioSamples = new short[numSamplesRead];
    if (audioFormat.isBigEndian()) {
        for (int i = 0; i < numSamplesRead; i++) {
            audioSamples[i] = (short) ((audioBytes[2 * i] << 8) | (audioBytes[2 * i + 1] & 0xFF));
        }
    } else {
        for (int i = 0; i < numSamplesRead; i++) {
            audioSamples[i] = (short) ((audioBytes[2 * i + 1] << 8) | (audioBytes[2 * i] & 0xFF));
        }
    }
    // use audioSamples in Xuggler etc.
}
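To hook those samples up to the IMediaWriter from the question, the writer also needs an audio stream next to the video stream, and each batch of samples needs a timestamp. A rough sketch, assuming the writer and startTime from the question's loop and the 2-channel/44100 Hz format chosen above (the stream indices here are an assumption):
// once, before the capture loop: video was added as stream 0, so add audio as stream 1
writer.addAudioStream(1, 0, 2, 44100); // channels and sample rate must match the capture format

// inside the loop, right after converting audioBytes to audioSamples:
writer.encodeAudio(1, audioSamples, System.nanoTime() - startTime, TimeUnit.NANOSECONDS);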
To pick a microphone, you'd probably have to do this:
Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo();
// Look through and select a mixer here, different mixers should be different inputs
int selectedMixerIndex = 0;
Mixer mixer = AudioSystem.getMixer(mixerInfo[ selectedMixerIndex ]);
TargetDataLine line = (TargetDataLine) mixer.getLine(dataLineInfo);
I think it's possible that multiple microphones will show up in one mixer as different target data lines. In that case you'd have to open them and call dataLine.getControl(FloatControl.Type.MASTER_GAIN).setValue(volume); to turn them on and off.
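To enumerate the candidates, something like this sketch should work (dataLineInfo is the one built above; mixer names and descriptions are platform-dependent):
Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo();
for (int i = 0; i < mixerInfo.length; i++) {
    Mixer m = AudioSystem.getMixer(mixerInfo[i]);
    // only mixers that can supply a TargetDataLine are capture devices
    if (m.isLineSupported(dataLineInfo)) {
        System.out.println(i + ": " + mixerInfo[i].getName() + " - " + mixerInfo[i].getDescription());
    }
}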
See:
WaveData.java
Sound wave from TargetDataLine
How to set volume of a SourceDataLine in Java
Related
I'm currently working on an application that plays back sound. I implemented playback for standard WAV files with the Java Sound API, no problems there, everything works fine. Now I want to add support for MP3 as well, but I'm having a strange problem: the playback gets distorted. I'm trying to figure out what I'm doing wrong; I would appreciate any leads in the right direction.
I'm using Mp3SPI (http://www.javazoom.net/mp3spi/documents.html) for playing back the MP3 files.
I have already tried taking a look at the output: I recorded a WAV file of the output I get from the MP3, then compared the waveforms of the original and the recorded file. As it turns out, the recorded file contains a lot of samples that are 0, or very close to it. Longer tones get broken up, and the waveform keeps returning to 0, then jumping back to where the waveform is in the original.
I open the file like this:
private AudioInputStream mp3;
private AudioInputStream rawMp3;

private void openMP3(File file) {
    // open the audio input stream
    try {
        rawMp3 = AudioSystem.getAudioInputStream(file);
        AudioFormat baseFormat = rawMp3.getFormat();
        AudioFormat decodedFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                baseFormat.getSampleRate(),
                16,
                baseFormat.getChannels(),
                baseFormat.getChannels() * 2,
                baseFormat.getSampleRate(),
                false);
        mp3 = AudioSystem.getAudioInputStream(decodedFormat, rawMp3);
    } catch (UnsupportedAudioFileException | IOException ex) {
        Logger.getLogger(SoundFile.class.getName()).log(Level.SEVERE, null, ex);
    }
}
The part where I read the MP3 file:
byte[] data = new byte[length];
// read the data into the buffer, keeping track of the offset so that
// successive reads append instead of overwriting from position 0
int nBytesRead = 0;
int offset = 0;
while (nBytesRead != -1 && offset < length) {
    nBytesRead = mp3.read(data, offset, length - offset);
    if (nBytesRead > 0) {
        offset += nBytesRead;
    }
}
I also convert the byte array to doubles; perhaps I'm doing something wrong here (I'm fairly new to using bitwise operators, so maybe the problem is there):
double[][] frameBuffer = new double[2][1024]; // 2-channel stereo buffer
int nFramesRead = 0;
int byteIndex = 0;
// convert the data into doubles and write it to frameBuffer
for (int i = 0; i < length; ++i) {
    for (int c = 0; c < 2; ++c) {
        byte a = data[byteIndex++];
        byte b = data[byteIndex++];
        // a is the least significant byte; mask it with 0xFF so it is not
        // sign-extended, then put b in the high 8 bits
        int val = (a & 0xFF) | (b << 8);
        frameBuffer[c][i] = (double) val / (double) Short.MAX_VALUE;
        nFramesRead++;
    }
}
The double array is then later used to play back the sound. When playing a WAV file, I do the exact same thing to the buffer, so I'm pretty sure it has to be something in the read process, not me sending faulty bytes to the output.
I would expect this to work out of the box with Mp3SPI, but somehow something breaks the audio along the way.
I am also open to trying other libraries to play back the MP3, if you have any recommendations. Just a decoder for the raw MP3 data would actually be enough.
As it turns out, the AudioFormat of the MP3 (input) and the AudioFormat of the output didn't match, obviously resulting in distortion. With those matched up, playback is fine!
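For reference, "matched" here means opening the output line with the same decodedFormat that the decoded stream delivers. A minimal sketch using the fields from openMP3 above (this bypasses the double-array path and writes the decoded bytes straight to the speakers):
// open the speaker line with the decoded PCM format, not the raw MP3 format
SourceDataLine speakers = AudioSystem.getSourceDataLine(decodedFormat);
speakers.open(decodedFormat);
speakers.start();
byte[] chunk = new byte[4096];
int n;
while ((n = mp3.read(chunk, 0, chunk.length)) != -1) {
    speakers.write(chunk, 0, n);
}
speakers.drain();
speakers.close();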
I'm trying to capture the sound of the PC. I have managed to capture the sound that enters the microphone through a TargetDataLine, but I cannot find a way to capture the sound that comes out of the speakers.
I've been looking at the mixers, but I have not managed to capture the sound. I would like to know if someone has done it, and whether you can give me some clue as to where to start.
Although your question is not really according to the "rules", here is a code snippet:
private byte[] record() throws LineUnavailableException {
    AudioFormat format = AudioUtil.getAudioFormat(audioConf);
    DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
    // Check if the system supports the data line
    if (!AudioSystem.isLineSupported(info)) {
        LOGGER.error("Line not supported");
        System.exit(0);
    }
    microphone = (TargetDataLine) AudioSystem.getLine(info);
    microphone.open(format);
    // Begin audio capture (start() only needs to be called once)
    microphone.start();
    LOGGER.info("Listening, tap enter to stop ...");
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    int numBytesRead;
    byte[] data = new byte[microphone.getBufferSize() / 5];
    // Here, stopped is a global boolean set by another thread.
    while (!stopped) {
        // Read the next chunk of data from the TargetDataLine.
        numBytesRead = microphone.read(data, 0, data.length);
        // Save this chunk of data.
        byteArrayOutputStream.write(data, 0, numBytesRead);
    }
    return byteArrayOutputStream.toByteArray();
}
Get more info from here:
https://www.programcreek.com/java-api-examples/?class=javax.sound.sampled.TargetDataLine&method=read
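If you want to keep the capture, the returned bytes can be wrapped in an AudioInputStream and written out as a WAV file. A small sketch, assuming the same AudioFormat that record() used:
byte[] recorded = record();
ByteArrayInputStream in = new ByteArrayInputStream(recorded);
// the AudioInputStream length argument is in sample frames, hence the division
AudioInputStream stream = new AudioInputStream(in, format, recorded.length / format.getFrameSize());
AudioSystem.write(stream, AudioFileFormat.Type.WAVE, new File("capture.wav"));
stream.close();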
I've created a byte-array WebSocket that receives audio chunks in real time from the client's mic (navigator.getUserMedia). I'm already recording this stream to a WAV file on the server once the WebSocket stops receiving new byte arrays. The following code represents the current situation.
WebSocket
@OnMessage
public void message(byte[] b) throws IOException {
    if (byteOutputStream == null) {
        byteOutputStream = new ByteArrayOutputStream();
    }
    byteOutputStream.write(b);
}
Thread that stores the WAV file
public void store() {
    byte b[] = byteOutputStream.toByteArray();
    try {
        AudioFormat audioFormat = new AudioFormat(44100, 16, 1, true, true);
        ByteArrayInputStream byteStream = new ByteArrayInputStream(b);
        // note: the AudioInputStream length argument is in sample frames, not bytes
        AudioInputStream audioStream = new AudioInputStream(byteStream, audioFormat, b.length / audioFormat.getFrameSize());
        DateTime date = new DateTime();
        File file = new File("/tmp/" + date.getMillis() + ".wav");
        AudioSystem.write(audioStream, AudioFileFormat.Type.WAVE, file);
        audioStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
But instead of recording a WAV file, my goal with this WebSocket is to process audio in real time using the YIN pitch detection algorithm implemented in the TarsosDSP library. In other words, this basically executes the PitchDetectorExample, but using the data from the WebSocket instead of the default audio device (the OS mic). The following code shows how PitchDetectorExample currently initializes live audio processing using the mic line provided by the OS.
private void setNewMixer(Mixer mixer) throws LineUnavailableException, UnsupportedAudioFileException {
    if (dispatcher != null) {
        dispatcher.stop();
    }
    currentMixer = mixer;
    float sampleRate = 44100;
    int bufferSize = 1024;
    int overlap = 0;
    final AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, true);
    final DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, format);
    TargetDataLine line;
    line = (TargetDataLine) mixer.getLine(dataLineInfo);
    final int numberOfSamples = bufferSize;
    line.open(format, numberOfSamples);
    line.start();
    final AudioInputStream stream = new AudioInputStream(line);
    JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
    // create a new dispatcher
    dispatcher = new AudioDispatcher(audioStream, bufferSize, overlap);
    // add a processor
    dispatcher.addAudioProcessor(new PitchProcessor(algo, sampleRate, bufferSize, this));
    new Thread(dispatcher, "Audio dispatching").start();
}
Is there a way to treat the WebSocket data as a TargetDataLine, so it would be possible to hook it up with the AudioDispatcher and PitchProcessor? Somehow, I need to send the byte arrays received from the WebSocket to the audio processing thread.
Other ideas on how to reach this objective are welcome. Thanks!
I'm not sure you need an AudioDispatcher. If you know how the bytes are encoded (PCM, 16-bit, little-endian, mono?), then you can convert them to floating point in real time and feed them to the pitch detector algorithm; in your WebSocket you can do something like this (and forget about the input streams and the AudioDispatcher):
int index;
byte[] buffer = new byte[2048];
float[] floatBuffer = new float[1024];
FastYin detector = new FastYin(44100, 1024);
// 16-bit, little-endian, signed, mono PCM; create the converter once
AudioFloatConverter converter = AudioFloatConverter.getConverter(new AudioFormat(44100, 16, 1, true, false));

public void message(byte[] b) {
    for (int i = 0; i < b.length; i++) {
        buffer[index] = b[i];
        index++;
        if (index == 2048) {
            // converts the byte buffer to floats
            converter.toFloatArray(buffer, floatBuffer);
            float pitch = detector.getPitch(floatBuffer).getPitch();
            // here you have your pitch info that you can use
            index = 0;
        }
    }
}
You do need to watch the number of bytes that have passed: since two bytes represent one float (if 16-bit PCM encoding is used), you need to start on even bytes. The endianness and sample rate are also important.
Regards
Joren
I'm trying to combine a list of pictures into an MP4 movie, adding an MP3 file.
The user can choose the length of the movie: either the length of the MP3 file, or a manually chosen length.
If the user chooses manually (length != MP3 file length), the MP3 file should be cut or looped.
Right now it works with the pictures, but without sound :(
private void convertImageToVideo() {
    IMediaWriter writer = ToolFactory.makeWriter(outputFilename);
    long delay = videotime / PicPathList.size();
    long milliseconds = 0;
    // adds pictures to the mp4 stream
    for (int i = 0; i < PicPathList.size(); i++) {
        BufferedImage bi;
        try {
            bi = ImageIO.read(new File(PicPathList.get(i)));
            bi = Tools.prepareForEncoding(bi);
            int width = bi.getWidth();
            int height = bi.getHeight();
            // H.264 requires even dimensions
            if (width % 2 == 1) {
                width++;
            }
            if (height % 2 == 1) {
                height++;
            }
            if (i == 0) {
                writer.addVideoStream(0, 0, ID.CODEC_ID_H264, width, height);
            }
            // debug
            // System.out.println(PicPathList.get(i) + ", bi:" + bi.getWidth() + "x"
            //         + bi.getHeight() + ", ms:" + milliseconds);
            writer.encodeVideo(0, bi, milliseconds, TimeUnit.MILLISECONDS);
            milliseconds += delay;
        } catch (IOException e) {
            e.printStackTrace();
            System.out.println("Error");
        }
    }
    writer.close();

    // at this part I'm trying to combine the generated mp4 file with the mp3 file
    String inputVideoFilePath = outputFilename;
    String inputAudioFilePath = this.musicFile.getAbsolutePath();
    String outputVideoFilePath = "outputFilename";
    IMediaWriter mWriter = ToolFactory.makeWriter(outputVideoFilePath);
    IContainer containerVideo = IContainer.make();
    IContainer containerAudio = IContainer.make();
    // check files are readable
    containerVideo.open(inputVideoFilePath, IContainer.Type.READ, null);
    containerAudio.open(inputAudioFilePath, IContainer.Type.READ, null);
    // read video file and create stream
    IStreamCoder coderVideo = containerVideo.getStream(0).getStreamCoder();
    IPacket packetvideo = IPacket.make();
    int width = coderVideo.getWidth();
    int height = coderVideo.getHeight();
    // read audio file and create stream
    IStreamCoder coderAudio = containerAudio.getStream(0).getStreamCoder();
    IPacket packetaudio = IPacket.make();
    mWriter.addAudioStream(1, 0, coderAudio.getCodecID(), coderAudio.getChannels(), coderAudio.getSampleRate());
    mWriter.addVideoStream(0, 0, width, height);
    while (containerVideo.readNextPacket(packetvideo) >= 0) {
        containerAudio.readNextPacket(packetaudio);
        // video packet
        IVideoPicture picture = IVideoPicture.make(coderVideo.getPixelType(), width, height);
        coderVideo.decodeVideo(picture, packetvideo, 0);
        if (picture.isComplete())
            mWriter.encodeVideo(0, picture);
        // audio packet
        IAudioSamples samples = IAudioSamples.make(512, coderAudio.getChannels(), IAudioSamples.Format.FMT_S32);
        coderAudio.decodeAudio(samples, packetaudio, 0);
        if (samples.isComplete())
            mWriter.encodeAudio(1, samples);
    }
    coderAudio.close();
    coderVideo.close();
    containerAudio.close();
    containerVideo.close();
    mWriter.close();
}
I answered this question here, and it is a complete answer:
JAVA - Xuggler - Play video while combining an MP3 audio file and a MP4 movie
You may use another jar file to merge your video and audio. Please note this is not the right way to do it, but I didn't have the time to dig into the Xuggler code.
I hope it works for you, too.
package MP4;

/**
 *
 * @author Pasban
 */
import com.coremedia.iso.boxes.Container;
import com.googlecode.mp4parser.authoring.Movie;
import com.googlecode.mp4parser.authoring.Track;
import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder;
import com.googlecode.mp4parser.authoring.container.mp4.MovieCreator;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class MuxMp4 {

    public static void merge(String audio, String video, String output) throws IOException {
        Movie countVideo = MovieCreator.build(video);
        Movie countAudioEnglish = MovieCreator.build(audio);
        Track audioTrackEnglish = countAudioEnglish.getTracks().get(0);
        audioTrackEnglish.getTrackMetaData().setLanguage("eng");
        countVideo.addTrack(audioTrackEnglish);
        Container out = new DefaultMp4Builder().build(countVideo);
        FileOutputStream fos = new FileOutputStream(new File(output));
        out.writeContainer(fos.getChannel());
        fos.close();
    }
}
Check the MP4Parser sample code for more information.
It is worth mentioning that both of your files should be MP4, so you need to convert your MP3 to MP4 as well, and your video should not contain any sound (which, in your case, it does not).
As I mentioned earlier, this is not the right way to get the job done.
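A hypothetical call site (the paths are placeholders, and remember the audio input must already be an MP4/AAC file rather than an MP3):
// mux an AAC audio track into the silent video produced by convertImageToVideo()
MuxMp4.merge("/tmp/audio.mp4", "/tmp/video.mp4", "/tmp/merged.mp4");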
I'm trying to generate sound with Java. In the end, I want to continuously send sound to the sound card, but for now I would be happy sending a single sound wave.
So, I filled an array with 44100 signed integers representing a simple sine wave, and I would like to send it to my sound card, but I just can't get it to work.
int samples = 44100; // 44100 samples/s
int[] data = new int[samples];
// Generate all samples
for (int i = 0; i < samples; ++i) {
    data[i] = (int) (Math.sin((double) i / (double) samples * 2 * Math.PI) * (Integer.MAX_VALUE / 2));
}
And I send it to a sound line using:
AudioFormat format = new AudioFormat(Encoding.PCM_SIGNED, 44100, 16, 1, 1, 44100, false);
Clip clip = AudioSystem.getClip();
AudioInputStream inputStream = new AudioInputStream(ais,format,44100);
clip.open(inputStream);
clip.start();
My problem resides between these two code snippets. I just can't find a way to convert my int[] to an input stream!
Firstly I think you want short samples rather than int:
short[] data = new short[samples];
because your AudioFormat specifies 16-bit samples. short is 16 bits wide, but int is 32 bits.
An easy way to convert it to a stream is:
Allocate a ByteBuffer
Populate it using putShort calls
Wrap the resulting byte[] in a ByteArrayInputStream
Create an AudioInputStream from the ByteArrayInputStream and format
Example:
float frameRate = 44100f; // 44100 samples/s
int channels = 2;
double duration = 1.0;
int sampleBytes = Short.SIZE / 8;
int frameBytes = sampleBytes * channels;
AudioFormat format =
        new AudioFormat(Encoding.PCM_SIGNED,
                frameRate,
                Short.SIZE,
                channels,
                frameBytes,
                frameRate,
                true);
int nFrames = (int) Math.ceil(frameRate * duration);
int nSamples = nFrames * channels;
int nBytes = nSamples * sampleBytes;
ByteBuffer data = ByteBuffer.allocate(nBytes);
double freq = 440.0;
// Generate all samples
for (int i = 0; i < nFrames; ++i) {
    double value = Math.sin((double) i / (double) frameRate * freq * 2 * Math.PI) * Short.MAX_VALUE;
    for (int c = 0; c < channels; ++c) {
        int index = (i * channels + c) * sampleBytes;
        data.putShort(index, (short) value);
    }
}
// the AudioInputStream length argument is in sample frames
AudioInputStream stream =
        new AudioInputStream(new ByteArrayInputStream(data.array()), format, nFrames);
Clip clip = AudioSystem.getClip();
clip.open(stream);
clip.start();
clip.drain();
Note: I changed your AudioFormat to stereo, because it threw an exception when I requested a mono line. I also increased the frequency of your waveform to something in the audible range.
Update - the previous modification (writing directly to the data line) was not necessary - using a Clip works fine. I have also introduced some variables to make the calculations clearer.
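As an aside on that stereo workaround: you can probe whether a format is supported before requesting a line, which avoids the exception. A small sketch, assuming the format variable from the example above:
DataLine.Info info = new DataLine.Info(Clip.class, format);
if (!AudioSystem.isLineSupported(info)) {
    System.err.println("Format not supported: " + format);
}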
If you want to play a simple Sound, you should use a SourceDataLine.
Here's an example:
import javax.sound.sampled.*;

public class Sound implements Runnable {

    //Specify the format as
    //44100 samples per second (sample rate)
    //16-bit samples,
    //Mono sound,
    //Signed values,
    //Big-Endian byte order
    final AudioFormat format = new AudioFormat(44100f, 16, 1, true, true);

    //Your output line that sends the audio to the speakers
    SourceDataLine line;

    public Sound() {
        try {
            line = AudioSystem.getSourceDataLine(format);
            line.open(format);
            line.start();
        } catch (LineUnavailableException oops) {
            oops.printStackTrace();
        }
        new Thread(this).start();
    }

    public void run() {
        //a buffer to store the audio samples
        byte[] buffer = new byte[1000];
        int bufferposition = 0;
        //a counter to generate the samples
        long c = 0;
        //The wavelength of your sine wave (440.0 Hz pitch in this case)
        double wavelength = 44100.0 / 440.0;
        while (true) {
            //Generate a sample
            short sample = (short) (Math.sin(2 * Math.PI * c / wavelength) * 32000);
            //Split the sample into two bytes and store them in the buffer
            buffer[bufferposition] = (byte) (sample >>> 8);
            bufferposition++;
            buffer[bufferposition] = (byte) (sample & 0xff);
            bufferposition++;
            //if the buffer is full, send it to the speakers
            if (bufferposition >= buffer.length) {
                line.write(buffer, 0, buffer.length);
                //Reset the buffer
                bufferposition = 0;
            }
            //Increment the counter (this must happen inside the loop,
            //otherwise the same sample is generated forever)
            c++;
        }
    }

    public static void main(String[] args) {
        new Sound();
    }
}
In this example you're continuously generating a sine wave, but you can use this code to play sound from any source you want. You just have to make sure that you format the samples correctly. In this case, I'm using raw, uncompressed 16-bit samples at a sample rate of 44100 Hz. However, if you want to play audio from a file, you can use a Clip object:
public void play(File file) throws LineUnavailableException, IOException, UnsupportedAudioFileException {
    Clip clip = AudioSystem.getClip();
    clip.open(AudioSystem.getAudioInputStream(file));
    clip.loop(1);
}
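One caveat: a Clip plays asynchronously, so if play is called from a short-lived main method, the JVM can exit before any sound is heard. A possible sketch for blocking until playback finishes, using a LineListener (the latch is illustrative):
final CountDownLatch done = new CountDownLatch(1);
clip.addLineListener(event -> {
    // STOP fires when the clip finishes or is stopped explicitly
    if (event.getType() == LineEvent.Type.STOP) {
        done.countDown();
    }
});
clip.loop(1);
done.await(); // declare or handle InterruptedException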