Java OGG Vorbis encoding

I am using the Tritonus package for audio encoding to Ogg Vorbis. I run into a problem when I specify the audio format:
Unsupported conversion: VORBIS 44100.0Hz, unknown bits per sample, mono, unknown frame size, from PCM_SIGNED 44100.0 Hz, 16 bit, mono, 2 bytes/frame, little-endian
This is the code where I specify the format:
File outputFile = new File(userDir + "//San" + "_" + strFilename + ".spx");
// Using PCM 44.1 kHz, 16 bit signed, stereo.
if (osName.indexOf("win") >= 0) {
    System.out.println("windows");
    audioFormat = getWindowsAudioFormat();
    sampleRate = 44100.0F;
} else {
    System.out.println("mac");
    audioFormat = getMacAudioFormat();
    sampleRate = 44100.0F;
}
AudioFormat vorbisFormat = new AudioFormat(VORBIS,
        sampleRate,
        AudioSystem.NOT_SPECIFIED,
        1,
        AudioSystem.NOT_SPECIFIED,
        AudioSystem.NOT_SPECIFIED,
        false);
DataLine.Info info = new DataLine.Info(TargetDataLine.class, audioFormat);
TargetDataLine targetDataLine = null;
AudioFileFormat.Type fileType = null;
File audioFile = null;
fileType = VORBIS;
try {
    targetDataLine = (TargetDataLine) AudioSystem.getLine(info);
    targetDataLine.open(audioFormat);
} catch (LineUnavailableException e) {
    System.out.println("unable to get a recording line");
    e.printStackTrace();
    System.exit(1);
}
AudioInputStream ais = new AudioInputStream(targetDataLine);
ais = AudioSystem.getAudioInputStream(vorbisFormat, ais);
final Recorder recorder = new Recorder(targetDataLine, ais, fileType, outputFile);
int number = 0;
System.out.println("Recording...");
recorder.start();

I wrote a utility class to encode Ogg Vorbis audio files from Java, using the Xiph Java ports of libogg and libvorbis:
https://github.com/xjmusic/java-vorbis-encoder/blob/master/VorbisEncoder.java
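That "Unsupported conversion" error usually means no format conversion provider for PCM to Vorbis is registered, for example because the Tritonus share and Vorbis encoder jars are not on the classpath. As a minimal diagnostic sketch (assuming the `audioFormat` and `vorbisFormat` variables from the question), you can ask Java Sound what it can actually convert the capture format to before building the stream:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;

// Diagnostic sketch: list the encodings Java Sound can convert the capture format to,
// and check the PCM -> Vorbis conversion explicitly. `audioFormat` and `vorbisFormat`
// are assumed to be the same variables as in the question.
public class ConversionCheck {
    public static void printConversions(AudioFormat audioFormat, AudioFormat vorbisFormat) {
        System.out.println("Target encodings available for " + audioFormat + ":");
        for (AudioFormat.Encoding encoding : AudioSystem.getTargetEncodings(audioFormat)) {
            System.out.println("  " + encoding);
        }
        // If this prints false, the Vorbis encoder SPI is not registered,
        // or it does not support this sample rate / channel combination.
        System.out.println("PCM -> Vorbis supported: "
                + AudioSystem.isConversionSupported(vorbisFormat, audioFormat));
    }
}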

Related

Detect Specific Frequency From Microphone Java

I'm trying to capture audio coming from the microphone and check the frequency of the sound. If I detect a frequency greater than, let's say, 1316.8 Hz, I will start recording for one minute.
I am struggling with converting the byte data to a frequency.
I have used javax.sound to capture the audio coming from the microphone, and I have done the recording part as well.
AudioFormat format = new AudioFormat(44100, 16, 2, true, true);
DataLine.Info targetInfo = new DataLine.Info(TargetDataLine.class, format);
DataLine.Info sourceInfo = new DataLine.Info(SourceDataLine.class, format);
try {
    TargetDataLine targetLine = (TargetDataLine) AudioSystem.getLine(targetInfo);
    targetLine.open(format);
    targetLine.start();
    SourceDataLine sourceLine = (SourceDataLine) AudioSystem.getLine(sourceInfo);
    sourceLine.open(format);
    sourceLine.start();
    int numBytesRead;
    byte[] targetData = new byte[targetLine.getBufferSize() / 5];
I expect the output to be the frequency of every sound coming from the microphone.
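One way to turn the captured bytes into a frequency estimate, sketched under the question's format (44100 Hz, 16 bit, stereo, signed, big-endian): convert the bytes to signed samples for one channel and count zero crossings to approximate the dominant frequency. Zero crossings are only reliable for near-pure tones; anything more complex needs an FFT.

// Sketch only: estimate a dominant frequency from raw bytes captured with the
// question's format (44100 Hz, 16-bit, 2 channels, signed, big-endian).
// Zero-crossing counting is only reliable for near-pure tones.
public class FrequencyEstimator {

    public static double estimateFrequency(byte[] data, int numBytesRead, float sampleRate) {
        // 16-bit big-endian stereo: 4 bytes per frame; use the left channel only.
        int numFrames = numBytesRead / 4;
        double[] samples = new double[numFrames];
        for (int i = 0; i < numFrames; i++) {
            int hi = data[4 * i];            // high byte (big-endian)
            int lo = data[4 * i + 1] & 0xFF; // low byte, masked to avoid sign extension
            samples[i] = (short) ((hi << 8) | lo);
        }
        // Count sign changes; a full cycle of a tone has two zero crossings.
        int crossings = 0;
        for (int i = 1; i < numFrames; i++) {
            if ((samples[i - 1] >= 0) != (samples[i] >= 0)) {
                crossings++;
            }
        }
        double seconds = numFrames / (double) sampleRate;
        return seconds > 0 ? crossings / (2.0 * seconds) : 0.0;
    }
}

Inside the capture loop this would be called right after numBytesRead = targetLine.read(targetData, 0, targetData.length); and the recording could be started once the estimate exceeds the 1316.8 Hz threshold.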

Performance issues with converting mp3 file input stream to byte output stream

I would like to extract the byte array from a given mp3 file in order to apply a fast Fourier transform to it. The FFT will give me some features for my pet project, a music recommendation system.
I have written the following code to extract the bytes from a given mp3 file:
public class TrackSample {
    private static byte[] readBytesInPredefinedFormat(TargetDataLine format, InputStream inStream) throws IOException {
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = inStream.read(buffer)) > 0) {
            int count = format.read(buffer, 0, buffer.length);
            if (count > 0) {
                byteArrayOutputStream.write(buffer, 0, count);
            }
            byteArrayOutputStream.write(buffer, 0, bytesRead);
        }
        byte[] bytes = byteArrayOutputStream.toByteArray();
        byteArrayOutputStream.close();
        inStream.close();
        return bytes;
    }

    public static byte[] getTrackBytes(String pathToTrackSample) throws IOException, LineUnavailableException {
        FileInputStream fileInputStream = new FileInputStream(pathToTrackSample);
        final AudioFormat format = CurrentAudioFormat.getAudioFormat(); // Fill AudioFormat with the wanted settings
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
        line.open(format);
        line.start();
        return readBytesInPredefinedFormat(line, fileInputStream);
    }
}
And the specified audio format is:
public class CurrentAudioFormat {
    public static AudioFormat getAudioFormat() {
        float sampleRate = 44100;
        int sampleSizeInBits = 8;
        int channels = 1; // mono
        boolean signed = true;
        boolean bigEndian = true;
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    }
}
I tried to test this code on the following mp3 file:
File type ID: MPG3
Num Tracks: 1
----
Data format: 2 ch, 44100 Hz, '.mp3' (0x00000000) 0 bits/channel, 0 bytes/packet, 1152 frames/packet, 0 bytes/frame
no channel layout.
estimated duration: 104.176325 sec
audio bytes: 4167053
audio packets: 3988
bit rate: 320000 bits per second
packet size upper bound: 1052
maximum packet size: 1045
audio data file offset: 3169
optimized
audio 4591692 valid frames + 576 priming + 1908 remainder = 4594176
The system characteristics are:
processor: Intel Core i5, 1.4 GHz
RAM: DDR3, 4 GB
OS: Mac OS X El Capitan
It took roughly 5 minutes to extract the byte array from this mp3 file.
What are the possible bottlenecks and how can I improve them?
To read the bytes you just need:
while ((bytesRead = inStream.read(buffer)) > -1) {
    byteArrayOutputStream.write(buffer, 0, bytesRead);
}
I don't know why you are reading twice.
To make sure that what you got is right, try re-saving it to a new audio file.
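A minimal sketch of that re-save check, assuming bytes is the array returned by the reading method and format is the PCM AudioFormat the bytes are supposed to be in:

import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

// Sketch: write the extracted PCM bytes back out as a WAV file so it can be
// played to verify the extraction. `bytes` and `format` are assumed inputs.
public class ResaveCheck {
    public static void saveAsWav(byte[] bytes, AudioFormat format, File out) throws IOException {
        long frames = bytes.length / format.getFrameSize();
        try (AudioInputStream ais =
                     new AudioInputStream(new ByteArrayInputStream(bytes), format, frames)) {
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
        }
    }
}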
--
The standard way to read an audio file is:
AudioInputStream audioInputStream = null;
try {
    audioInputStream = AudioSystem.getAudioInputStream(new File(file));
} catch (UnsupportedAudioFileException auf) {
    auf.printStackTrace();
}
Then you pass this audioInputStream to your reading method.
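Sketched end to end for an mp3, this needs an MP3 decoder service provider (for example mp3spi with JLayer) on the classpath, because the stock JDK cannot decode mp3. Note that no TargetDataLine is involved at all; a data line captures live audio and is not needed to read a file:

import java.io.ByteArrayOutputStream;
import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

// Sketch: decode an mp3 to PCM and collect the bytes, assuming an MP3 decoder
// SPI (e.g. mp3spi + JLayer) is on the classpath.
public class Mp3Bytes {
    public static byte[] decodeToPcm(File mp3) throws Exception {
        try (AudioInputStream in = AudioSystem.getAudioInputStream(mp3)) {
            AudioFormat base = in.getFormat();
            // Ask for signed 16-bit PCM at the file's own sample rate and channel count.
            AudioFormat pcm = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                    base.getSampleRate(), 16, base.getChannels(),
                    base.getChannels() * 2, base.getSampleRate(), false);
            try (AudioInputStream decoded = AudioSystem.getAudioInputStream(pcm, in);
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = decoded.read(buffer)) > -1) {
                    out.write(buffer, 0, n);
                }
                return out.toByteArray();
            }
        }
    }
}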

issue when opening wav file using (Java) AudioInputStream

I am using JDK 7 and trying to play a wav file. I tried the following test but got the error copied below:
Error:
line with format ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame, not supported.
Sample Code:
import javax.sound.sampled.*;

try {
    Clip clip = AudioSystem.getClip();
    AudioInputStream inputStream = AudioSystem.getAudioInputStream(
            new File("C://Users//xyz//Desktop//centerClosed.wav"));
    clip.open(inputStream);
    clip.start();
} catch (Exception e) {
    System.err.println(e.getMessage());
}
Any ideas on how I go about handling this case? Thanks in advance
Your wav file seems to be in ULAW format, sampled at 8 kHz, a format the Clip apparently does not understand.
Try converting the audio to 44.1 kHz, 16-bit PCM like this:
import javax.sound.sampled.*;

try {
    Clip clip = AudioSystem.getClip();
    AudioInputStream ulawIn = AudioSystem.getAudioInputStream(
            new File("C://Users//xyz//Desktop//centerClosed.wav"));
    // Define a target AudioFormat that is likely to be supported by the audio hardware,
    // i.e. 44.1 kHz sampling rate and 16-bit samples.
    AudioInputStream pcmIn = AudioSystem.getAudioInputStream(
            new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100f, 16, 1, 2, 44100f, true),
            ulawIn);
    clip.open(pcmIn);
    clip.start();
} catch (Exception e) {
    System.err.println(e.getMessage());
}
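If that one-step conversion also throws "Unsupported conversion" on a particular JVM (decoding ULAW and resampling in a single call is not always available), here is a fallback sketch that only changes the encoding and bit depth while keeping the original 8 kHz rate, which the default Clip normally accepts:

import java.io.File;
import javax.sound.sampled.*;

// Fallback sketch: decode ULAW to 16-bit signed PCM but keep the original
// 8000 Hz sample rate; the default Clip implementation normally accepts that.
public class PlayUlaw {
    public static void play(File wavFile) throws Exception {
        AudioInputStream ulawIn = AudioSystem.getAudioInputStream(wavFile);
        AudioFormat src = ulawIn.getFormat();
        AudioFormat pcm = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                src.getSampleRate(), 16, src.getChannels(),
                src.getChannels() * 2, src.getSampleRate(), false);
        Clip clip = AudioSystem.getClip();
        clip.open(AudioSystem.getAudioInputStream(pcm, ulawIn));
        clip.start();
    }
}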

How to get Audio for encoding using Xuggler

I'm writing an application that records the screen and audio. While the screen recording works perfectly, I'm having difficulty in getting the raw audio using the JDK libraries. Here's the code:
try {
    // Now, we're going to loop
    long startTime = System.nanoTime();
    System.out.println("Encoding Image.....");
    while (!Thread.currentThread().isInterrupted()) {
        // take the screen shot
        BufferedImage screen = robot.createScreenCapture(screenBounds);
        // convert to the right image type
        BufferedImage bgrScreen = convertToType(screen, BufferedImage.TYPE_3BYTE_BGR);
        // encode the image
        writer.encodeVideo(0, bgrScreen, System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
        /* Need to get audio here and then encode using xuggler. Something like
        WaveData wd = new WaveData();
        TargetDataLine line;
        AudioInputStream aus = new AudioInputStream(line);
        short[] samples = getSourceSamples();
        writer.encodeAudio(0, samples); */
        if (timeCreation < 10) {
            timeCreation = getGMTTime();
        }
        // sleep for framerate milliseconds
        try {
            Thread.sleep((long) (1000 / FRAME_RATE.getDouble()));
        } catch (Exception ex) {
            System.err.println("stopping....");
            break;
        }
    }
    // Finally we tell the writer to close and write the trailer if needed
} finally {
    writer.close();
}
This page has some pseudocode like:
while (haveMoreAudio()) {
    short[] samples = getSourceSamples();
    writer.encodeAudio(0, samples);
}
but what exactly should I do for getSourceSamples()?
Also, a bonus question - is it possible to choose from multiple microphones in this approach?
See also:
Xuggler encoding and muxing
Try this:
// Pick a format. Need 16 bits, the rest can be set to anything.
// It is better to enumerate the formats that the system supports, because getLine() can error out with any particular format.
AudioFormat audioFormat = new AudioFormat(44100.0F, 16, 2, true, false);

// Get default TargetDataLine with that format
DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
TargetDataLine line = (TargetDataLine) AudioSystem.getLine(dataLineInfo);

// Open and start capturing audio
line.open(audioFormat, line.getBufferSize());
line.start();

while (true) {
    // read as raw bytes
    byte[] audioBytes = new byte[line.getBufferSize() / 2]; // best size?
    int numBytesRead = line.read(audioBytes, 0, audioBytes.length);

    // convert to signed shorts representing samples
    // (mask the low byte with 0xFF to avoid sign extension)
    int numSamplesRead = numBytesRead / 2;
    short[] audioSamples = new short[numSamplesRead];
    if (audioFormat.isBigEndian()) {
        for (int i = 0; i < numSamplesRead; i++) {
            audioSamples[i] = (short) ((audioBytes[2 * i] << 8) | (audioBytes[2 * i + 1] & 0xFF));
        }
    } else {
        for (int i = 0; i < numSamplesRead; i++) {
            audioSamples[i] = (short) ((audioBytes[2 * i + 1] << 8) | (audioBytes[2 * i] & 0xFF));
        }
    }

    // use audioSamples in Xuggler etc
}
To pick a microphone, you'd probably have to do this:
Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo();
// Look through and select a mixer here, different mixers should be different inputs
int selectedMixerIndex = 0;
Mixer mixer = AudioSystem.getMixer(mixerInfo[ selectedMixerIndex ]);
TargetDataLine line = (TargetDataLine) mixer.getLine(dataLineInfo);
I think it's possible that multiple microphones will show up in one mixer as different source data lines. In that case you'd have to open them and call dataLine.getControl(FloatControl.Type.MASTER_GAIN).setValue( volume ); to turn them on and off.
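A small sketch of enumerating the mixers so you can see which indices correspond to capture devices; the format used here is the same illustrative one as above:

import javax.sound.sampled.*;

// Sketch: list every mixer and whether it can supply a TargetDataLine
// (i.e. act as a capture device) for a given format.
public class ListCaptureMixers {
    public static void main(String[] args) {
        AudioFormat format = new AudioFormat(44100.0F, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        Mixer.Info[] mixers = AudioSystem.getMixerInfo();
        for (int i = 0; i < mixers.length; i++) {
            Mixer mixer = AudioSystem.getMixer(mixers[i]);
            boolean canCapture = mixer.isLineSupported(info);
            System.out.println(i + ": " + mixers[i].getName()
                    + (canCapture ? "  [capture]" : ""));
        }
    }
}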
See:
WaveData.java
Sound wave from TargetDataLine
How to set volume of a SourceDataLine in Java

How can I convert a wav file in java

I am trying to convert a wav file in Java, using this target format:
AudioFormat targetFormat = new AudioFormat(
        sourceFormat.getEncoding(),
        fTargetFrameRate,
        16,
        sourceFormat.getChannels(),
        sourceFormat.getFrameSize(),
        fTargetFrameRate,
        false);
This results in an exception:
java.lang.IllegalArgumentException: Unsupported conversion:
ULAW 8000.0 Hz, 16 bit, mono, 1 bytes/frame, from ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame
Is this possible in Java? I need to get a 16-bit wav file from an 8-bit one.
Here is a method that will convert an 8-bit uLaw encoded binary file into a 16-bit WAV file using built-in Java methods.
public static void convertULawFileToWav(String filename) {
    File file = new File(filename);
    if (!file.exists())
        return;
    try {
        long fileSize = file.length();
        // Describe the raw file: 8 kHz, 8-bit uLaw, mono, one byte per frame.
        AudioFormat ulawFormat = new AudioFormat(Encoding.ULAW, 8000, 8, 1, 1, 8000, true);
        AudioInputStream ulawStream = new AudioInputStream(new FileInputStream(file), ulawFormat, fileSize);
        // Decode to 16-bit signed PCM so the written WAV really is 16 bit.
        AudioFormat pcmFormat = new AudioFormat(Encoding.PCM_SIGNED, 8000, 16, 1, 2, 8000, false);
        AudioInputStream pcmStream = AudioSystem.getAudioInputStream(pcmFormat, ulawStream);
        AudioSystem.write(pcmStream, Type.WAVE, new File("C:\\file.wav"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Look at this one: Conversion of Audio Format. It is similar to your issue and suggests looking at http://docs.oracle.com/javase/6/docs/api/javax/sound/sampled/AudioSystem.html
You can always use FFmpeg (http://ffmpeg.org/) to do the conversion; your Java program can call FFmpeg to do it.
FFmpeg works on all operating systems.
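A sketch of shelling out to FFmpeg from Java with ProcessBuilder; the file names are placeholders and ffmpeg is assumed to be installed and on the PATH (-c:a pcm_s16le requests 16-bit signed PCM output):

// Sketch: call FFmpeg to convert an 8-bit uLaw WAV into 16-bit signed PCM.
// Assumes ffmpeg is on the PATH; the file names are placeholders.
public class FfmpegConvert {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-y",
                "-i", "input.wav",       // 8-bit uLaw source
                "-c:a", "pcm_s16le",     // 16-bit signed little-endian PCM
                "output.wav");
        pb.inheritIO();                  // show FFmpeg's console output
        int exitCode = pb.start().waitFor();
        System.out.println("ffmpeg exited with code " + exitCode);
    }
}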
