Unwanted downsampling: Java Sound

I have been trying to manually read a WAV file in Java: read it into an array of bytes, then write those bytes to an audio buffer for playback. I am getting playback, but it is heavily distorted. Java Sound supports 16-bit sample sizes but not 24-bit.
I went into Logic 9 and exported a 24-bit audio file as 16-bit, then used that with my program. Originally, the 24-bit samples produced white noise. Now I can hear my sample, but it is very distorted and sounds as if it has been bit-crushed.
Can anyone help me get a clean signal?
I am very new to audio programming, but I am currently working on a basic Digital Audio Workstation.
import javax.sound.sampled.*;
import javax.sound.sampled.DataLine.Info;
import javax.swing.filechooser.FileNameExtensionFilter;
import java.io.*;

public class AudioData {

    private String filepath;
    private String filepath1;
    private File file;
    private byte[] fileContent;
    private Mixer mixer;
    private Mixer.Info[] mixInfos;
    private AudioInputStream input;
    private ByteArrayOutputStream byteoutput;

    public static void main(String[] args) {
        AudioData audiodata = new AudioData();
    }

    public AudioData() {
        filepath = "/Users/ivaannagen/Documents/Samples/Engineering Samples - Obscure Techno Vol 3 (WAV)/ES_OT3_Kit03_Gmin_130bpm/ES_OT3_Kit03_FX_Fast_Snare_Riser_Gmin_130bpm.wav";
        filepath1 = "/Users/ivaannagen/Documents/Samples/dawsampletest.wav";
        file = new File(filepath1);
        readAudio();
    }

    public void readAudio() {
        mixInfos = AudioSystem.getMixerInfo();
        mixer = AudioSystem.getMixer(mixInfos[0]);
        // set up an audio format.
        AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
        try {
            // creates data line with class type and audio format.
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
            SourceDataLine source = (SourceDataLine) AudioSystem.getLine(info);
            System.out.println("Size of data line buffer: " + source.getBufferSize());
            fileContent = new byte[source.getBufferSize() / 50];
            byteoutput = new ByteArrayOutputStream();
            input = AudioSystem.getAudioInputStream(file);
            int readBytes = 0;
            while ((readBytes = input.read(fileContent, 0, fileContent.length)) != -1) {
                byteoutput.write(fileContent, 0, readBytes);
            }
            System.out.println("Size of audio buffer: " + fileContent.length);
            // byteoutput.write(0);
            // byteoutput.write(0);
            System.out.println("Size of audio buffer: " + byteoutput.size());
            source.open(format, source.getBufferSize()); // line must be open to be recognised by the mixer.
            Line[] lines = mixer.getSourceLines();
            System.out.println("mixer lines: " + lines.length);
            // for (byte bytes : fileContent) {
            //     System.out.println(bytes);
            // }
            Thread playback = new Thread() {
                public void run() {
                    // System.out.println((byteoutput.size() + 2) % 4);
                    source.start(); // play (buffer originally empty)
                    source.write(byteoutput.toByteArray(), 0, byteoutput.size()); // write input bytes to output buffer
                } // end run (to do).
            }; // end thread action
            playback.start(); // start thread
        }
        catch (LineUnavailableException lue) {
            System.out.println(lue.getMessage());
        }
        catch (FileNotFoundException fnfe) {
            System.out.println(fnfe.getMessage());
        }
        catch (IOException ioe) {
            System.out.println(ioe.getMessage());
        }
        catch (UnsupportedAudioFileException uafe) {
            System.out.println(uafe.getMessage());
        }
    }
}

Whether or not you can load and play a 24-bit file is system dependent, AFAIK.
I use Audacity for conversions. You should be able to import your file into Audacity and export it as 16-bit, stereo, little-endian, 44100 fps, and then load that export with Java's AudioInputStream.
What you hear when playing from Audacity or from Java should be pretty much identical (adjusting for volume). If not, the most likely reason is a mistake or oversight in the code, which is very easy to make.
The use of a ByteArrayOutputStream in your code is superfluous. Read from the AudioInputStream into a fixed-size byte array (8 * 1024 or 16 * 1024 bytes is a reasonable first try) and then use the SourceDataLine write method to ship that array.
Following is code that works on my system for loading and playing a "CD Quality" wav called "a3.wav" that sits in the same directory as the Java class. You should be able to swap in your own 44100 fps, 16-bit, stereo, little-endian wav file.
I've commented out an attempt to load and play a 24-bit wav file called "spoken8000_24.wav". That attempt gave me an IllegalArgumentException: No line matching interface SourceDataLine supporting format PCM_SIGNED 8000.0 Hz, 24 bit, stereo, 6 bytes/frame, little-endian is supported.
I have to admit, I'm unclear whether my system doesn't provide the needed line or whether I coded the format incorrectly! My OS can certainly play the file, so I'm thinking there is a distinction between what an OS can do and what a "Mixer" on a given system provides to Java.
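If you want to test programmatically what Java's mixers will accept, something like this sketch (using the same 24-bit format from my failed attempt) asks AudioSystem and each Mixer whether they can supply such a line, before any open is attempted:
import javax.sound.sampled.*;

public class FormatCheck
{
    public static void main(String[] args)
    {
        AudioFormat fmt24 = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                8000, 24, 2, 6, 8000, false);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, fmt24);
        // true only if some installed Mixer can supply such a line
        System.out.println("24-bit line supported: " + AudioSystem.isLineSupported(info));
        // or ask each Mixer individually
        for (Mixer.Info mi : AudioSystem.getMixerInfo())
        {
            Mixer mixer = AudioSystem.getMixer(mi);
            System.out.println(mi.getName() + ": " + mixer.isLineSupported(info));
        }
    }
}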
As a get-around, I just always convert everything to "CD Quality" format, as that seems to be the most widely supported.
import javax.sound.sampled.*;
import javax.sound.sampled.DataLine.Info;
import javax.swing.*;
import java.io.IOException;
import java.net.URL;

public class TriggerSound_SDL extends JFrame
{
    public TriggerSound_SDL()
    {
        JButton button = new JButton("Play Sound");
        button.addActionListener(e -> new Thread(() -> playBuzzer()).start());
        getContentPane().add(button);
    }

    private void playBuzzer()
    {
        try
        {
            URL url;
            url = getClass().getResource("a3.wav");
            // url = getClass().getResource("spoken8000_24.wav");
            AudioInputStream ais = AudioSystem.getAudioInputStream(url);
            System.out.println(ais.getFormat());

            AudioFormat audioFmt;
            // "CD Quality" 44100 fps, 16-bit, stereo, little-endian
            audioFmt = new AudioFormat(
                    AudioFormat.Encoding.PCM_SIGNED,
                    44100, 16, 2, 4, 44100, false);
            // 8000 fps, 24-bit, stereo
            // audioFmt = new AudioFormat(
            //         AudioFormat.Encoding.PCM_SIGNED,
            //         8000, 24, 2, 6, 8000, false);

            Info info = new DataLine.Info(SourceDataLine.class, audioFmt);
            SourceDataLine sdl = (SourceDataLine) AudioSystem.getLine(info);

            int bufferSize = 16 * 1024;
            byte[] buffer = new byte[bufferSize];
            sdl.open(audioFmt, bufferSize);
            sdl.start();
            int numBytesRead = 0;
            while ((numBytesRead = ais.read(buffer)) != -1)
            {
                sdl.write(buffer, 0, numBytesRead);
            }
        }
        catch (IOException | UnsupportedAudioFileException
                | LineUnavailableException ex)
        {
            ex.printStackTrace();
        }
    }

    private static void createAndShowGUI()
    {
        JFrame frame = new TriggerSound_SDL();
        frame.setDefaultCloseOperation(DISPOSE_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }

    public static void main(String[] args)
    {
        SwingUtilities.invokeLater(() -> createAndShowGUI());
    }
}
This code, with some small tweaks, should let you at least test the different formats.
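On the conversion point above: Java Sound can sometimes do the conversion itself, via AudioSystem.getAudioInputStream(targetFormat, sourceStream). Support for a given conversion path varies from system to system, so treat the following as a sketch (the output file name is arbitrary) and fall back to Audacity if it throws an IllegalArgumentException:
import javax.sound.sampled.*;
import java.io.File;

public class ConvertToCdQuality
{
    public static void main(String[] args) throws Exception
    {
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("spoken8000_24.wav"));
        // "CD Quality": 44100 fps, 16-bit, stereo, little-endian
        AudioFormat cd = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                44100, 16, 2, 4, 44100, false);
        // throws IllegalArgumentException if no installed converter handles this path
        AudioInputStream converted = AudioSystem.getAudioInputStream(cd, in);
        AudioSystem.write(converted, AudioFileFormat.Type.WAVE, new File("spoken44100_16.wav"));
    }
}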
EDIT:
I see that your goal is to make a DAW!
In that case, you will want to convert the bytes to PCM data. Can I suggest you borrow some code from AudioCue? I basically wrote it to be a Clip substitute, and part of that involved making the PCM data available for manipulation. Some techniques for mixing, playing back at different frequencies, and multithreading can be found in it.
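To give a flavor of that byte-to-PCM step, here is a minimal sketch (my own illustration, not code lifted from AudioCue) for the 16-bit, little-endian format used above; each pair of bytes becomes one float in [-1, 1):
public static float[] toPcm(byte[] bytes, int numBytes)
{
    float[] pcm = new float[numBytes / 2];
    for (int i = 0, j = 0; i < numBytes; i += 2, j++)
    {
        // little-endian: low byte first, high byte carries the sign
        int sample = (bytes[i] & 0xFF) | (bytes[i + 1] << 8);
        pcm[j] = sample / 32768f; // normalize to [-1, 1)
    }
    return pcm;
}
With stereo data the resulting floats stay interleaved (left, right, left, right); mixing tracks then amounts to adding their floats frame by frame and keeping the sum in range.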

Thanks for all the advice, guys. I will be getting rid of the ByteArrayOutputStream and just use the AudioInputStream; I now understand that what I was doing was unnecessary! Thanks for the advice, all! I have indeed tried AudioCue, but it is not low-level enough for what I want to do!
One more thing. Previously, I created a multitrack media player using the Clip class. To play all the audio tracks together, I looped through a list of Clips and played each one. However, this means the tracks may start a tiny amount after one another because of the processing in the loop. Also, the Clip class creates a new thread per sound. I do not want 100 threads running for 100 tracks; I want one thread for my audio output. I am still trying to work out how to start all tracks at the same time without a loop... (I'm guessing AudioCue has nailed concurrent cues).
Does anyone know the best way to play multiple audio tracks into one output? Do I need to route/bus all my audio tracks into one output and somehow write all the data from the audio files into one output buffer, then play that output in a thread?
Thanks!!

Related

How to get byte data from an audio file (as signed ints/bytes) in Flutter/Dart, like AudioInputStream gives you in Java

Good day. I'm trying to create a music identification app (like Shazam) in Flutter (I'm also new to Flutter), and I want it to run on mobile and desktop.
I have this piece of code in Java that gives me back a byte array with the time-domain values in it:
File soundFile;
AudioInputStream audioStream;
AudioFormat audioFormat;
SourceDataLine sourceLine;
int check = 0;
byte[] songBytes;
DataLine.Info info;

soundFile = new File("./testWave.wav");
songBytes = new byte[(int) soundFile.length()];
audioStream = AudioSystem.getAudioInputStream(soundFile);
audioFormat = audioStream.getFormat();
info = new DataLine.Info(SourceDataLine.class, audioFormat);
sourceLine = (SourceDataLine) AudioSystem.getLine(info);
sourceLine.open(audioFormat);
sourceLine.start();

while (check > -1) {
    check = audioStream.read(songBytes, 0, songBytes.length);
}
sourceLine.drain();
sourceLine.close();

for (int i = 0; i < songBytes.length; i++) {
    System.out.println(songBytes[i]);
}
I have searched and could not find any way to do this in Flutter/Dart. Can anyone give me guidance on the best way of doing this in Flutter/Dart, if it is possible? If not, can you please advise me on the best alternative?
Let's say your WAV header is 74 bytes long. (It will vary according to the number of chunks, so really you need to parse it to determine that. But for any one source of WAV files it will often be the same number - use a hex dump to determine the offset of the data block plus 4.)
(By parsing the header you can find out other things like the sample rate and whether it's mono or stereo, etc.)
Then, if bytes is the Uint8List, you need bytes.buffer.asInt16List(74). This means: interpret the buffer backing the bytes as signed shorts, but starting at offset 74 - after the header.
var dataOffset = 74; // parse the WAV header or determine from a hex dump
var bytes = await file.readAsBytes();
var shorts = bytes.buffer.asInt16List(dataOffset);
print(shorts[0]); // the first sample of audio
print(shorts.length); // the number of audio samples

Java audio - trim an audio file down to a specified length

I am trying to create a small Java program to cut an audio file down to a specified length. Currently I have the following code:
import java.util.*;
import java.io.*;
import javax.sound.sampled.*;

public class cuttest_3 {
    public static void main(String[] args)
    {
        int totalFramesRead = 0;
        File fileIn = new File("output1.wav");
        // somePathName is a pre-existing string whose value was
        // based on a user selection.
        try {
            AudioInputStream audioInputStream =
                AudioSystem.getAudioInputStream(fileIn);
            int bytesPerFrame =
                audioInputStream.getFormat().getFrameSize();
            if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
                // some audio formats may have unspecified frame size
                // in that case we may read any amount of bytes
                bytesPerFrame = 1;
            }
            // Set a buffer size of 5512 frames - semiquavers at 120bpm
            int numBytes = 5512 * bytesPerFrame;
            byte[] audioBytes = new byte[numBytes];
            try {
                int numBytesRead = 0;
                int numFramesRead = 0;
                // Try to read numBytes bytes from the file.
                while ((numBytesRead =
                        audioInputStream.read(audioBytes)) != -1) {
                    // Calculate the number of frames actually read.
                    numFramesRead = numBytesRead / bytesPerFrame;
                    totalFramesRead += numFramesRead;
                    // Here, - output a trimmed audio file
                    AudioInputStream cutFile =
                        new AudioInputStream(audioBytes);
                    AudioSystem.write(cutFile,
                        AudioFileFormat.Type.WAVE,
                        new File("cut_output1.wav"));
                }
            } catch (Exception ex) {
                // Handle the error...
            }
        } catch (Exception e) {
            // Handle the error...
        }
    }
}
On attempting compilation, the following error is returned:
cuttest_3.java:50: error: incompatible types: byte[] cannot be converted to TargetDataLine
new AudioInputStream(audioBytes);
I am not very familiar with AudioInputStream handling in Java, so can anyone suggest how to get the data into a form that can be written out? Many thanks.
You have to tell the AudioInputStream how to decipher the bytes you pass in, as specified by Matt in the answer here. This documentation indicates what each of the parameters means.
A stream of bytes does not mean anything until you indicate to the system playing the sound how many channels there are, the bit resolution per sample, the samples per second, etc.
Since .wav files are a well-understood format and, I think, carry data at the front of the file defining various parameters of the audio track, the AudioInputStream can correctly decipher the first file you pass in.
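For example, a sketch reusing the variables from the question: the AudioInputStream constructor that takes an InputStream also wants the format and a length in sample frames, not bytes.
AudioFormat format = audioInputStream.getFormat();
AudioInputStream cutFile = new AudioInputStream(
        new ByteArrayInputStream(audioBytes, 0, numBytesRead),
        format,
        numBytesRead / bytesPerFrame); // length is in frames, not bytes
(ByteArrayInputStream comes from java.io, which the question already imports.)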

How to play a binary stream in Java

This question is probably not even going to be asked correctly, but I promise you I'm doing my best. Here's the scenario: I wrote a little Java app that receives an audio stream from a server. Now, when I redirect the binary stream to a file, and pipe that file to mplayer, the audio is played correctly. What I want now though is to play the audio from my own app. Here's what I got so far:
The codec mplayer uses to play the stream:
AUDIO: 22050 Hz, 2 ch, s16le, 0.0 kbit/0.00% (ratio: 0->88200)
Selected audio codec: [ffaac] afm: ffmpeg (FFmpeg AAC (MPEG-2/MPEG-4 Audio))
What I coded so far:
public class StreamPlayer {

    public final AudioFormat audioFormat;
    public final DataLine.Info info;
    public final SourceDataLine soundLine;

    public StreamPlayer() throws LineUnavailableException {
        audioFormat = new AudioFormat(22050, 16, 2, true, true);
        info = new DataLine.Info(SourceDataLine.class, audioFormat, 1500);
        soundLine = (SourceDataLine) AudioSystem.getLine(info);
    }

    public void startSoundLine() throws LineUnavailableException {
        soundLine.open(audioFormat);
        soundLine.start();
    }

    public void playStream(byte[] buffer) {
        soundLine.write(buffer, 0, buffer.length);
    }
}
and I call the playStream function for every received packet. No errors are reported, and no sound is heard. Am I even close to doing this, or off by a long shot?
P.S. I found some third party libraries on google, but I'd really like to keep them as a last resort.
Thank you!
Get your bytes into a ByteArrayInputStream and feed that to the stream player instead...
ByteArrayInputStream audioStream = new ByteArrayInputStream(buffer);
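...then pull fixed-size chunks from it and hand them to the line. A minimal sketch using the soundLine field from the question (note this assumes the incoming bytes are raw PCM in the line's format; Java Sound cannot decode compressed audio such as the AAC shown in the mplayer output without an extra decoder SPI on the classpath):
byte[] chunk = new byte[1500]; // matches the buffer size requested for the line
int n;
while ((n = audioStream.read(chunk)) != -1) {
    soundLine.write(chunk, 0, n); // blocks until the line can take more data
}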

Concatenation of two WAV files failed

I have this simple code to concatenate two WAV files. It's pretty simple and the code runs without any errors. But there is a problem with the output file: it does not play, and surprisingly its size is only 44 bytes, whereas my input files "a.wav" & "b.wav" are both more than 500 KB.
Here is my code:
import java.io.File;
import java.io.IOException;
import java.io.SequenceInputStream;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class WavConcat {
    public static void main(String[] args) {
        String wFile1 = "./sounds/a.wav";
        String wFile2 = "./sounds/b.wav";
        try {
            AudioInputStream clip1 = AudioSystem.getAudioInputStream(new File(wFile1));
            AudioInputStream clip2 = AudioSystem.getAudioInputStream(new File(wFile2));
            AudioInputStream appendedFiles =
                new AudioInputStream(
                    new SequenceInputStream(clip1, clip2),
                    clip1.getFormat(),
                    clip1.getFrameLength() + clip2.getFrameLength());
            AudioSystem.write(appendedFiles,
                AudioFileFormat.Type.WAVE, new File("./sounds/ab.wav"));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Try this kind of structure. This worked for me:
List<AudioInputStream> audioInputStreamList = new ArrayList<>();
String wFile1 = "./sounds/a.wav";
String wFile2 = "./sounds/b.wav";
AudioInputStream audioInputStream1 = AudioSystem.getAudioInputStream(new File(wFile1));
AudioInputStream audioInputStream2 = AudioSystem.getAudioInputStream(new File(wFile2));
audioInputStreamList.add(audioInputStream1);
audioInputStreamList.add(audioInputStream2);
AudioFormat audioFormat = audioInputStream1.getFormat(); // audioInputStream2's format is the same
AudioInputStream audioInputStream = new SequenceAudioInputStream(audioFormat, audioInputStreamList);
AudioSystem.write(audioInputStream, AudioFileFormat.Type.WAVE, new File("./sounds/ab.wav"));
UPDATE
See SequenceAudioInputStream - it is not part of the standard JDK; an implementation ships with the jsresources.org Java Sound examples.
clip1.getFormat() returns-->
MPEG2L3 24000.0 Hz, unknown bits per sample, mono, unknown frame size, 41.666668 frames/second
clip2.getFormat() returns-->
MPEG2L3 24000.0 Hz, unknown bits per sample, mono, unknown frame size, 41.666668 frames/second
That is an odd format. I can imagine the 'unknown bits per sample' is causing a problem, but so is the MPEG2L3, since Java Sound has no built-in MP3 decoder. It seems like the files are not encoded properly. Try loading them in sound editing software and saving them as a type of WAV or AU that Java Sound can understand 'out of the box'. Hopefully the editing software:
Can understand the broken MP3, and..
Will write a valid WAV or AU.
If you can convert them to 8-bit mono & 8 kHz during the conversion, it might reduce the byte[] size by a factor of 6 to 1. 8 kHz is considered good enough to understand speech, and for this use you need to serve the bytes of the combined sound out to the browser, so reducing the size is crucial.
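Once the files really are PCM WAVs, the JDK-only approach from the question should work as-is. Here is a minimal sketch (same file names as the question) with a guard for the unknown frame length reported above, which is a red flag before writing:
AudioInputStream clip1 = AudioSystem.getAudioInputStream(new File("./sounds/a.wav"));
AudioInputStream clip2 = AudioSystem.getAudioInputStream(new File("./sounds/b.wav"));
// a stream that cannot report its frame count cannot be concatenated this way
if (clip1.getFrameLength() == AudioSystem.NOT_SPECIFIED
        || clip2.getFrameLength() == AudioSystem.NOT_SPECIFIED) {
    throw new IllegalStateException("re-save the inputs as PCM WAV first");
}
AudioInputStream joined = new AudioInputStream(
        new SequenceInputStream(clip1, clip2),
        clip1.getFormat(),
        clip1.getFrameLength() + clip2.getFrameLength());
AudioSystem.write(joined, AudioFileFormat.Type.WAVE, new File("./sounds/ab.wav"));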

Java SFXR Port - Trouble writing byte[] to WAV file

I'm using a Java port of the sound effect generator SFXR, which involves lots of arcane music code that I don't understand, being something of a novice when it comes to anything to do with audio. What I do know is that the code can reliably generate and play sounds within Java, using a SourceDataLine object.
The data that the SDL object uses is stored in a byte[]. However, simply writing this out to a file doesn't work (presumably because of the lack of a WAV header, or so I thought).
However, I downloaded this WAV read/write class: http://computermusicblog.com/blog/2008/08/29/reading-and-writing-wav-files-in-java/ which adds in header information when it writes a WAV file. Giving it the byte[] data from SFXR still produces files that can't be played by any music player I have.
I figure I must be missing something. Here's the relevant code where it plays the sound data:
public void play(int millis) throws Exception {
    AudioFormat stereoFormat = getStereoAudioFormat();
    SourceDataLine stereoSdl = AudioSystem.getSourceDataLine(stereoFormat);
    if (!stereoSdl.isOpen()) {
        try {
            stereoSdl.open();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }
    if (!stereoSdl.isRunning()) {
        stereoSdl.start();
    }
    double seconds = millis / 1000.0;
    int bufferSize = (int) (4 * 41000 * seconds);
    byte[] target = new byte[bufferSize];
    writeBytes(target);
    stereoSdl.write(target, 0, target.length);
}
That's from the SFXR port. Here's the save() method from the WavIO class (there's a lot of other code in that class, of course; I figured this might be worth posting in case someone wants to see exactly how the buffer data is being handled):
public boolean save()
{
    try
    {
        DataOutputStream outFile = new DataOutputStream(new FileOutputStream(myPath));
        // write the wav file per the wav file format
        outFile.writeBytes("RIFF");                                     // 00 - RIFF
        outFile.write(intToByteArray((int) myChunkSize), 0, 4);         // 04 - how big is the rest of this file?
        outFile.writeBytes("WAVE");                                     // 08 - WAVE
        outFile.writeBytes("fmt ");                                     // 12 - fmt
        outFile.write(intToByteArray((int) mySubChunk1Size), 0, 4);     // 16 - size of this chunk
        outFile.write(shortToByteArray((short) myFormat), 0, 2);        // 20 - what is the audio format? 1 for PCM = Pulse Code Modulation
        outFile.write(shortToByteArray((short) myChannels), 0, 2);      // 22 - mono or stereo? 1 or 2? (or 5 or ???)
        outFile.write(intToByteArray((int) mySampleRate), 0, 4);        // 24 - samples per second (numbers per second)
        outFile.write(intToByteArray((int) myByteRate), 0, 4);          // 28 - bytes per second
        outFile.write(shortToByteArray((short) myBlockAlign), 0, 2);    // 32 - # of bytes in one sample, for all channels
        outFile.write(shortToByteArray((short) myBitsPerSample), 0, 2); // 34 - how many bits in a sample(number)? usually 16 or 24
        outFile.writeBytes("data");                                     // 36 - data
        outFile.write(intToByteArray((int) myDataSize), 0, 4);          // 40 - how big is this data chunk
        outFile.write(myData);                                          // 44 - the actual data itself - just a long string of numbers
    }
    catch (Exception e)
    {
        System.out.println(e.getMessage());
        return false;
    }
    return true;
}
All I know is, I've got a bunch of data, and I want it to end up in a playable audio file of some kind (at this point I'd take ANY format!). What's the best way for me to get this byte buffer into a playable file? Or is this byte[] not what I think it is?
I do not get much chance to play with the sound capabilities of Java so I'm using your question as a learning exercise (I hope you don't mind). The article that you referenced about Reading and Writing WAV Files in Java is very old in relation to Java history (1998). Also something about constructing the WAV header by hand didn't sit quite right with me (it seemed a little error prone). As Java is quite a mature language now I would expect library support for this kind of thing.
I was able to construct a WAV file from a byte array by hunting around the internet for sample code snippets. This is the code that I came up with (I expect it is sub-optimal but it seems to work):
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.util.Random;
import javax.sound.sampled.*;

// enclosing class added so the snippet compiles as-is
public class BangToWav {

    // Generate bang noise data
    // Sourced from http://www.rgagnon.com/javadetails/java-0632.html
    public static byte[] bang() {
        byte[] buf = new byte[8050];
        Random r = new Random();
        boolean silence = true;
        for (int i = 0; i < 8000; i++) {
            // bounds guard added: the original could overrun buf on a long random run
            while (i < buf.length && r.nextInt() % 10 != 0) {
                buf[i] =
                    silence ? 0
                            : (byte) Math.abs(r.nextInt()
                                % (int) (1. + 63. * (1. + Math.cos(((double) i)
                                    * Math.PI / 8000.))));
                i++;
            }
            silence = !silence;
        }
        return buf;
    }

    private static void save(byte[] data, String filename) throws IOException, LineUnavailableException, UnsupportedAudioFileException {
        InputStream byteArray = new ByteArrayInputStream(data);
        AudioInputStream ais = new AudioInputStream(byteArray, getAudioFormat(), (long) data.length);
        AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File(filename));
    }

    private static AudioFormat getAudioFormat() {
        return new AudioFormat(
            8000f,  // sampleRate
            8,      // sampleSizeInBits
            1,      // channels
            true,   // signed
            false); // bigEndian
    }

    public static void main(String[] args) throws Exception {
        byte[] data = bang();
        save(data, "test.wav");
    }
}
I hope it helps.
