How to get the decibel level of byte audio data - Java

I have been working on a Java program that captures microphone audio as byte data and then sends it somewhere else (another part of my program). Is there any way I can calculate the decibel value of the data?
I am using a TargetDataLine. In each iteration I save data into a tempData holder, which I then write into a ByteArrayOutputStream, and in each iteration I am also trying to calculate the decibel level of tempData.
Keep in mind that I don't really understand much about sound in computers or in Java in general, so please forgive my lack of knowledge.
This is class 1, "Foo"; it handles when to stop capturing:
public class Foo {
public static void foo() {
AudioFormat format = new AudioFormat(8000.0f, 16, 1, true, true);
try (
var microphone = (TargetDataLine) AudioSystem.getLine(new DataLine.Info(TargetDataLine.class, format))
) {
// the line has to be opened and started before it can be read from
microphone.open(format);
microphone.start();
var micListener = new MicListener(microphone);
ByteArrayOutputStream allData = new ByteArrayOutputStream();
byte[] tempData;
final int chunkSize = 1024;
while (true) {
// in this case the loop goes forever, but in my program it stops when the user stops capturing audio.
tempData = micListener.startRecording(chunkSize);
// calculate the decibel value of tempData; this is the part I need help with
double decibel = Utils.calculateDecibel(tempData);
// if the decibel value is high enough, keep this chunk
if (decibel > 50)
allData.write(tempData, 0, micListener.getNumOfBytesRead());
}
} catch (LineUnavailableException e) {
e.printStackTrace();
}
}
}
This is class 2, "MicListener"; it handles the capture of data:
public class MicListener {
private final TargetDataLine target;
private byte[] audioData;
private int numOfBytesRead = 0;
public MicListener(TargetDataLine target){
this.target = target;
audioData = new byte[target.getBufferSize() / 5];
}
public byte[] startRecording(int chunkSize) throws LineUnavailableException {
numOfBytesRead = target.read(audioData , 0 , chunkSize);
return audioData;
}
public int getNumOfBytesRead() {
return numOfBytesRead;
}
}
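For what it's worth, here is my rough guess at what Utils.calculateDecibel could look like, based on computing the RMS of the 16-bit big-endian samples and converting it to dB relative to full scale. I'm not sure this is right (for one thing, I think the result would be negative dBFS values, so my > 50 check above would need a different threshold):
public class Utils {
    // Rough guess: RMS level in dB relative to full scale, for 16-bit signed big-endian mono PCM.
    // Ideally only the valid bytes (micListener.getNumOfBytesRead()) would be processed, not the whole array.
    public static double calculateDecibel(byte[] data) {
        long sumOfSquares = 0;
        int sampleCount = 0;
        for (int i = 0; i + 1 < data.length; i += 2) {
            // combine two bytes into one signed 16-bit sample (big-endian, matching the AudioFormat)
            int sample = (short) (((data[i] & 0xff) << 8) | (data[i + 1] & 0xff));
            sumOfSquares += (long) sample * sample;
            sampleCount++;
        }
        if (sampleCount == 0) {
            return Double.NEGATIVE_INFINITY; // no samples, treat as silence
        }
        double rms = Math.sqrt((double) sumOfSquares / sampleCount);
        // 32768 is full scale for 16-bit audio, so the result is 0 dB at maximum and negative otherwise
        return 20.0 * Math.log10(rms / 32768.0);
    }
}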
Thanks for the help! Have a great day.

Related

Java - Start Audio Playback at X Position

Edit: I am using a .wav file
I'm trying to figure out how to start audio at a certain position (for example, 10 seconds into the audio file rather than at the start). Reading the documentation for SourceDataLine led me to believe this might be achieved using the offset argument of:
line.write(byte[] b, int offset, int length)
but every time I've tried any value other than 0 (the default, I believe), I get java.lang.IndexOutOfBoundsException. Maybe it hasn't read up to byte position x yet and so cannot write from byte position x? I'm unsure and left scratching my head.
I figured this would be a common enough request but can't seem to find anything online related to this, only pausing and resuming audio. I'm probably not searching properly.
In case it matters, here is how I'm currently doing my audio:
AudioInputStream stream = AudioSystem.getAudioInputStream("...file...");
AudioFormat format = stream.getFormat();
SourceDataLine.Info info = new DataLine.Info(SourceDataLine.class, format,((int)stream.getFrameLength()*format.getFrameSize()));
SourceDataLine line = (SourceDataLine)AudioSystem.getLine(info);
int bufferSize = line.getBufferSize();
byte inBuffer[] = new byte[bufferSize];
byte outBuffer[] = new byte[bufferSize];
int numRead, numWritten;
do {
numRead = stream.read(inBuffer, 0, bufferSize);
if(numRead <= 0) {
myAudio.flushStream();
} else {
myAudio.writeBytesToStream(inBuffer, numRead);
}
do {
numWritten = myAudio.readBytesFromStream(outBuffer, bufferSize);
if(numWritten > 0) {
line.write(outBuffer, 0, numWritten);
}
} while(numWritten > 0);
} while(numRead > 0);
The problem you are having probably stems from the fact that you are adjusting the offset without adjusting the length. If your array is 10 bytes long and you start reading 10 bytes from offset 5 instead of 0, you read 5 bytes past its end.
I'd recommend first skipping the appropriate number of bytes using skip(long) on the AudioInputStream and then writing to the line.
AudioInputStream stream = AudioSystem.getAudioInputStream("...file...");
AudioFormat format = stream.getFormat();
// find out how many bytes you have to skip, this depends on bytes per frame (a.k.a. frameSize)
int secondsToSkip = 10;
long bytesToSkip = format.getFrameSize() * ((int)format.getFrameRate()) * secondsToSkip;
// now skip until the correct number of bytes have been skipped
long justSkipped = 0; // skip(long) returns a long
while (bytesToSkip > 0 && (justSkipped = stream.skip(bytesToSkip)) > 0) {
bytesToSkip -= justSkipped;
}
// then proceed with writing to your line like you have done before
[...]
Note that this only works if the audio file is uncompressed. If you are dealing with something like .mp3, you first have to convert the stream to PCM (see https://stackoverflow.com/a/41850901/942774).
I've created an example which compiles and works. You can play a .wav file starting from any point in time. It should also work for an mp3 file, but I haven't tested that; invoke mp3ToWav() for that.
import javax.sound.sampled.*;
import java.io.File;
import java.io.IOException;
public class PlayWavAtTimePoint {
public static void main(String[] args) throws Exception {
String fileName = args[0];
int secondsToSkip = (Integer.parseInt(args[1]));
PlayWavAtTimePoint program = new PlayWavAtTimePoint();
AudioInputStream is = program.getAudioInputStream(fileName);
program.skipFromBeginning(is, secondsToSkip);
program.playSound(is);
}
private static void skipFromBeginning(AudioInputStream audioStream, int secondsToSkip) throws UnsupportedAudioFileException, IOException, LineUnavailableException {
AudioFormat format = audioStream.getFormat();
// find out how many bytes you have to skip, this depends on bytes per frame (a.k.a. frameSize)
long bytesToSkip = format.getFrameSize() * ((int)format.getFrameRate()) * secondsToSkip;
// now skip until the correct number of bytes have been skipped
long justSkipped = 0;
while (bytesToSkip > 0 && (justSkipped = audioStream.skip(bytesToSkip)) > 0) {
bytesToSkip -= justSkipped;
}
}
private static final int BUFFER_SIZE = 128000;
/**
* @param filename the name of the file that is going to be played
*/
public void playSound(String filename) throws IOException, UnsupportedAudioFileException, LineUnavailableException {
AudioInputStream audioStream = getAudioInputStream(filename);
playSound(audioStream);
}
private AudioInputStream getAudioInputStream(String filename) throws UnsupportedAudioFileException, IOException {
return AudioSystem.getAudioInputStream(new File(filename));
}
public void playSound(AudioInputStream audioStream) throws LineUnavailableException, IOException {
AudioFormat audioFormat = audioStream.getFormat();
DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
SourceDataLine audioOutput = (SourceDataLine) AudioSystem.getLine(info);
audioOutput.open(audioFormat);
audioOutput.start();
//This seems to be reading the whole file into a buffer before playing ... not efficient.
//Why not stream it?
int nBytesRead = 0;
byte[] abData = new byte[BUFFER_SIZE];
while (nBytesRead != -1) {
nBytesRead = audioStream.read(abData, 0, abData.length);
if (nBytesRead >= 0) {
audioOutput.write(abData, 0, nBytesRead);
}
}
audioOutput.drain();
audioOutput.close();
}
/**
* Invoke this function to convert to a playable file.
*/
public static void mp3ToWav(File mp3Data) throws UnsupportedAudioFileException, IOException {
// open stream
AudioInputStream mp3Stream = AudioSystem.getAudioInputStream(mp3Data);
AudioFormat sourceFormat = mp3Stream.getFormat();
// create audio format object for the desired stream/audio format
// this is *not* the same as the file format (wav)
AudioFormat convertFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
sourceFormat.getSampleRate(), 16,
sourceFormat.getChannels(),
sourceFormat.getChannels() * 2,
sourceFormat.getSampleRate(),
false);
// create stream that delivers the desired format
AudioInputStream converted = AudioSystem.getAudioInputStream(convertFormat, mp3Stream);
// write stream into a file with file format wav
AudioSystem.write(converted, AudioFileFormat.Type.WAVE, new File("/tmp/out.wav"));
}
}
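To run the example, pass the file name and the number of seconds to skip as command-line arguments, e.g. java PlayWavAtTimePoint mysong.wav 10 (mysong.wav is just a placeholder name).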

How can I write a WAV file from byte array in Java

I'd like to develop a simple Java music player that speeds up and plays music using this Sonic algorithm: github/Sonic.java. Here's the main class: github/Main.java. Main.java simply calls Sonic.java and can then play the music. It works well when playing a WAV file, but what I want is to write a new WAV file from the accelerated input stream.
I've tried writing the bytes to a ByteArrayOutputStream in the do-while loop of Main.java and then transforming them into a local WAV file, but the generated music gets cut off, so obviously some data is lost during this process.
public class App {
private static void runSonic(
AudioInputStream audioStream,
SourceDataLine line,
float speed,
float pitch,
float rate,
float volume,
boolean emulateChordPitch,
int quality,
int sampleRate,
int numChannels) throws IOException
{
Sonic sonic = new Sonic(sampleRate, numChannels);
int bufferSize = line.getBufferSize();
byte inBuffer[] = new byte[bufferSize];
byte outBuffer[] = new byte[bufferSize];
int numRead,numWritten;
AudioFormat af = audioStream.getFormat();
ByteArrayOutputStream output = new ByteArrayOutputStream();
sonic.setSpeed(speed);
sonic.setPitch(pitch);
sonic.setRate(rate);
sonic.setVolume(volume);
sonic.setChordPitch(emulateChordPitch);
sonic.setQuality(quality);
int count = 0;
do {
numRead = audioStream.read(inBuffer, 0, bufferSize);
if(numRead <= 0) {
sonic.flushStream();
} else {
sonic.writeBytesToStream(inBuffer, numRead);
}
do {
numWritten = sonic.readBytesFromStream(outBuffer, bufferSize);
if(numWritten > 0) {
line.write(outBuffer, 0, numWritten);
output.write(outBuffer);
}
} while(numWritten > 0);
} while(numRead > 0);
byte fileBuffer[] = output.toByteArray();
ByteArrayInputStream bais1 = new ByteArrayInputStream(fileBuffer);
AudioInputStream aisAccelerated1 =
new AudioInputStream(bais1, af, fileBuffer.length);
try {
AudioSystem.write(aisAccelerated1, AudioFileFormat.Type.WAVE, new
File("newFile.wav")
);
}
catch(Exception e) {
e.printStackTrace();
}
}
public static void main(
String[] argv) throws UnsupportedAudioFileException, IOException, LineUnavailableException
{
float speed = 1.5f;
float pitch = 1.5f;
float rate = 1.0f;
float volume = 1.0f;
boolean emulateChordPitch = false;
int quality = 0;
String fileName = "file.wav";
AudioInputStream stream = AudioSystem.getAudioInputStream(new File(fileName));
AudioFormat format = stream.getFormat();
int sampleRate = (int)format.getSampleRate();
int numChannels = format.getChannels();
SourceDataLine.Info info = new DataLine.Info(SourceDataLine.class, format,
((int)stream.getFrameLength()*format.getFrameSize()));
SourceDataLine line = (SourceDataLine)AudioSystem.getLine(info);
line.open(stream.getFormat());
line.start();
runSonic(stream, line, speed, pitch, rate, volume, emulateChordPitch, quality,
sampleRate, numChannels);
line.drain();
line.stop();
}
}
Can anyone tell me what's going on here? I thought all the bytes stored in outBuffer had been written into the output stream this way.
You can find the whole class using the links above.
output.write(outBuffer);
The problem is here. It should be
output.write(outBuffer, 0, numWritten);
With the original call you write the entire buffer every time, including the stale bytes past numWritten, so you are writing garbage to the output.

Sound class sounds layered and screechy on Windows

So, when I'm on Mac, this problem does not occur. However, when I am on Windows, any sounds I play multiple times over each other start to sound screechy and layer over each other in an unpleasant way.
Here is relevant code from my Sound class:
public class NewerSound {
private boolean stop = true;
private boolean loopable;
private boolean isUrl;
private URL fileUrl;
private Thread sound;
private double volume = 1.0;
public NewerSound(URL url, boolean loopable) throws UnsupportedAudioFileException, IOException {
isUrl = true;
fileUrl = url;
this.loopable = loopable;
}
public void play() {
stop = false;
Runnable r = new Runnable() {
@Override
public void run() {
do {
try {
AudioInputStream in;
if(!isUrl)
in = getAudioInputStream(new File(fileName));
else
in = getAudioInputStream(fileUrl);
final AudioFormat outFormat = getOutFormat(in.getFormat());
final Info info = new Info(SourceDataLine.class, outFormat);
try(final SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info)) {
if(line != null) {
line.open(outFormat);
line.start();
AudioInputStream inputMystream = AudioSystem.getAudioInputStream(outFormat, in);
stream(inputMystream, line);
line.drain();
line.stop();
}
}
}
catch(UnsupportedAudioFileException | LineUnavailableException | IOException e) {
throw new IllegalStateException(e);
}
} while(loopable && !stop);
}
};
sound = new Thread(r);
sound.start();
}
private AudioFormat getOutFormat(AudioFormat inFormat) {
final int ch = inFormat.getChannels();
final float rate = inFormat.getSampleRate();
return new AudioFormat(PCM_SIGNED, rate, 16, ch, ch * 2, rate, false);
}
private void stream(AudioInputStream in, SourceDataLine line) throws IOException {
byte[] buffer = new byte[4];
for(int n = 0; n != -1 && !stop; n = in.read(buffer, 0, buffer.length)) {
byte[] bufferTemp = new byte[buffer.length];
for(int i = 0; i < bufferTemp.length; i += 2) {
short audioSample = (short) ((short) ((buffer[i + 1] & 0xff) << 8) | (buffer[i] & 0xff));
audioSample = (short) (audioSample * volume);
bufferTemp[i] = (byte) audioSample;
bufferTemp[i + 1] = (byte) (audioSample >> 8);
}
buffer = bufferTemp;
line.write(buffer, 0, n);
}
}
}
It is possible that the issue comes from accessing the same resources when the same sound is played over itself multiple times via the NewerSound.play() method.
Please let me know if any other details are needed. Much appreciated :)
The method you are using to change the volume in "stream" is flawed. You have 16-bit encoding, so it takes two bytes to derive a single audio value. You need to assemble the value from each byte pair before the multiplication, then split the 16-bit result back into two bytes. There are a number of Stack Overflow threads with code to do this.
I don't know if this is the whole reason for the problem you describe but it definitely could be, and definitely needs to be fixed.
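As a rough sketch of that assemble-scale-split idea for 16-bit signed little-endian PCM (which is what your outFormat produces), with clamping added so the scaled value cannot overflow the 16-bit range; the helper name and signature are just illustrative:
// Scale 16-bit signed little-endian PCM samples in place by the given volume factor.
private static void applyVolume(byte[] buffer, int validBytes, double volume) {
    for (int i = 0; i + 1 < validBytes; i += 2) {
        // assemble the two bytes into one signed 16-bit sample
        int sample = (short) (((buffer[i + 1] & 0xff) << 8) | (buffer[i] & 0xff));
        // scale and clamp to the 16-bit range to avoid wrap-around distortion
        sample = (int) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sample * volume));
        // split the result back into two bytes (little-endian)
        buffer[i] = (byte) (sample & 0xff);
        buffer[i + 1] = (byte) ((sample >> 8) & 0xff);
    }
}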

java DSP synth strange behaviour

I am trying to play a signal saved in a byte array, using javax.sound.sampled.SourceDataLine.
For a start, I am trying to play a simple sine wave.
For some frequencies (for instance 1000 Hz, 400 Hz) it works well, but for others (1001, 440)
I only get an almost pitchless buzz.
The sampling rate is definitely high enough to prevent aliasing (16 kHz).
Any ideas ?
Cheers.
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;
public class Player
{
private static float SAMPLE_RATE = 16000;
public static void main(String[] args)
{
playSound();
}
private static void playSound()
{
try
{
final AudioFormat audioFormat = new AudioFormat( SAMPLE_RATE, 8, 1, true, true );
SourceDataLine line = AudioSystem.getSourceDataLine( audioFormat );
line.open( audioFormat );
line.start();
/* the last argument here is the frequency in Hz. It works well with 1000, but with 1001 I get a weak and pitchless clicking sound */
byte[] signal = smpSin( 1, 1, 1000 );
play( line, signal );
line.drain();
line.close();
}
catch (Exception e)
{
e.printStackTrace();
}
}
private static byte[] smpSin(double lenInSec, double amp, double signalFreq)
{
int len = (int)(SAMPLE_RATE * lenInSec);
byte[] out = new byte[len];
for (int i = 0; i < out.length; i++)
{
out[i] = (byte)(amp * Math.sin( ((2.0 * Math.PI * signalFreq) * ((double)i)) / SAMPLE_RATE ));
}
return out;
}
private static void play(SourceDataLine line, byte[] array)
{
line.write( array, 0, array.length );
}
}
You aren't saving the phase of the sine wave between buffer calls. Thus any phase discontinuity will cause a buzz at the rate play() is called. Frequencies where there is no buzz just happen to end at your default beginning phase.
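For illustration, here is a sketch of generating successive buffers while carrying the phase over from one buffer to the next (the names are mine, not from your code):
// Generate the next buffer of an 8-bit sine wave without a phase discontinuity.
// `phase` is kept outside the method so each buffer starts where the previous one ended.
private static double phase = 0.0;

private static byte[] nextSineBuffer(int numSamples, double amp, double freq, float sampleRate) {
    byte[] out = new byte[numSamples];
    double phaseStep = 2.0 * Math.PI * freq / sampleRate;
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) (amp * Math.sin(phase));
        phase += phaseStep;
        if (phase > 2.0 * Math.PI) {
            phase -= 2.0 * Math.PI; // keep phase small to preserve floating-point precision
        }
    }
    return out;
}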

Audio Mixing with Java (without Mixer API)

I am attempting to mix several different audio streams and trying to get them to play at the same time instead of one-at-a-time.
The code below plays them one-at-a-time and I cannot figure out a solution that does not use the Java Mixer API. Unfortunately, my audio card does not support synchronization using the Mixer API and I am forced to figure out a way to do it through code.
Please advise.
/////CODE IS BELOW////
class MixerProgram {
public static AudioFormat monoFormat;
private JFileChooser fileChooser = new JFileChooser();
private static File[] files;
private int trackCount;
private FileInputStream[] fileStreams = new FileInputStream[trackCount];
public static AudioInputStream[] audioInputStream;
private Thread trackThread[] = new Thread[trackCount];
private static DataLine.Info sourceDataLineInfo = null;
private static SourceDataLine[] sourceLine;
public MixerProgram(String[] s)
{
trackCount = s.length;
sourceLine = new SourceDataLine[trackCount];
audioInputStream = new AudioInputStream[trackCount];
files = new File[s.length];
}
public static void getFiles(String[] s)
{
files = new File[s.length];
for(int i=0; i<s.length;i++)
{
File f = new File(s[i]);
if (!f.exists())
System.err.println("Wave file not found: " + filename);
files[i] = f;
}
}
public static void loadAudioFiles(String[] s)
{
AudioInputStream in = null;
audioInputStream = new AudioInputStream[s.length];
sourceLine = new SourceDataLine[s.length];
for(int i=0;i<s.length;i++){
try
{
in = AudioSystem.getAudioInputStream(files[i]);
}
catch(Exception e)
{
System.err.println("Failed to assign audioInputStream");
}
monoFormat = in.getFormat();
AudioFormat decodedFormat = new AudioFormat(
AudioFormat.Encoding.PCM_SIGNED,
monoFormat.getSampleRate(), 16, monoFormat.getChannels(),
monoFormat.getChannels() * 2, monoFormat.getSampleRate(),
false);
monoFormat = decodedFormat; //give back name
audioInputStream[i] = AudioSystem.getAudioInputStream(decodedFormat, in);
sourceDataLineInfo = new DataLine.Info(SourceDataLine.class, monoFormat);
try
{
sourceLine[i] = (SourceDataLine) AudioSystem.getLine(sourceDataLineInfo);
sourceLine[i].open(monoFormat);
}
catch(LineUnavailableException e)
{
System.err.println("Failed to get SourceDataLine" + e);
}
}
}
public static void playAudioMix(String[] s)
{
final int tracks = s.length;
System.out.println(tracks);
Runnable playAudioMixRunner = new Runnable()
{
int bufferSize = (int) monoFormat.getSampleRate() * monoFormat.getFrameSize();
byte[] buffer = new byte[bufferSize];
public void run()
{
if(tracks==0)
return;
for(int i = 0; i < tracks; i++)
{
sourceLine[i].start();
}
int bytesRead = 0;
while(bytesRead != -1)
{
for(int i = 0; i < tracks; i++)
{
try
{
bytesRead = audioInputStream[i].read(buffer, 0, buffer.length);
}
catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
if(bytesRead >= 0)
{
int bytesWritten = sourceLine[i].write(buffer, 0, bytesRead);
System.out.println(bytesWritten);
}
}
}
}
};
Thread playThread = new Thread(playAudioMixRunner);
playThread.start();
}
}
The problem is that you are not adding the samples together. If we are looking at 4 tracks of 16-bit PCM data, you need to add all the different values together to "mix" them into one final output. So, from a purely numbers point of view, it would look like this:
[Track1] 320 -16 2000 200 400
[Track2] 16 8 123 -87 91
[Track3] -16 -34 -356 1200 805
[Track4] 1011 1230 -1230 -100 19
[Final!] 1331 1188 537 1213 1315
In your above code, you should only be writing a single byte array. That byte array is the final mix of all tracks added together. The problem is that you are writing a byte array for each different track (so there is no mixdown happening, as you observed).
If you want to guarantee you don't have any "clipping", you should take the average of all tracks (so add all four tracks above and divide by 4). However, there are artifacts from that approach (for example, if you have silence on three tracks and one loud track, the final output will be much quieter than the one track that is not silent). There are more complicated algorithms you can use to do the mixing, but by then you are writing your own mixer :P.
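As a rough sketch of what summing with simple clamping (instead of averaging) could look like for 16-bit signed little-endian PCM chunks of equal length (the names here are illustrative, not from your code):
// Mix several 16-bit signed little-endian PCM chunks into one by summing the samples.
// trackBuffers holds one chunk per track; all chunks are assumed to cover the same time span.
static byte[] mixChunks(byte[][] trackBuffers, int validBytes) {
    byte[] mixed = new byte[validBytes];
    for (int i = 0; i + 1 < validBytes; i += 2) {
        int sum = 0;
        for (byte[] track : trackBuffers) {
            // assemble one signed 16-bit sample from this track
            sum += (short) (((track[i + 1] & 0xff) << 8) | (track[i] & 0xff));
        }
        // clamp so the sum cannot wrap around the 16-bit range
        int sample = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        mixed[i] = (byte) (sample & 0xff);
        mixed[i + 1] = (byte) ((sample >> 8) & 0xff);
    }
    return mixed;
}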
