In my project I need to open a SEQ file, so I use FileInputStream, which requires loading the data into a byte array. Because of that, each pixel gets the wrong value (the pixels are integers).
Below in my code you can see that I put the pixels into a 2D array; each pixel value is computed in this line:
wart =(int) (buf[offset]) +(int)(buf[offset+1]) * 255;
I know the values are wrong because of the signed byte input format (the first two pixels, as doubles, should be 152.109692453756 and 152.068644316116, but in my Java function they come out as -2474 and -690).
I tried using a mask:
wart =(int) (buf[offset]<< 8) & 0x0000ff00 +(int)(buf[offset+1])& 0x000000ff * 255 ;
It helps a little (the values aren't negative), but they are "shifted" too much (the first two pixels come out as 19456 and 18944).
I don't know how to solve this problem. I know the mask should be different, but I don't know how to set it.
public class Sekwencja2 {
@SuppressWarnings("empty-statement")
public double[] sekwencja2(String nazwa,int nr_klatki) throws FileNotFoundException, IOException{
InputStream is = null;
DataInputStream dis = null;
is = new FileInputStream(nazwa);
dis = new DataInputStream(is);
int length = dis.available();
byte[] buf = new byte[length];
dis.readFully(buf);
int l_klatek = ((length-158864)/158864)+1;
int width = 320;
int height = 240;
int C1=21764040;
double C2=3033.3;
double C3=134.06;
int z = 0;
double[] oneDArray = new double[width*height];
double [][] pixels = new double[width][height];
int offset =0;
char type;
String typeText;
type=(char)buf[0];
typeText =Character.toString(type);
switch (typeText) {
case "A":
if(nr_klatki == 1)
offset= 696;
else
offset = 158136+(nr_klatki-1)*569+(nr_klatki-2)*(320*240*2+3839);
break;
case "F":
offset=(nr_klatki-1)*158864 + 1373;
break;
}
int wart = 0 ;
for(int x = 0; x<320; x++){
for (int y = 0; y<240;y++){
switch (typeText){
case "A":
if(nr_klatki==1)
wart =(int) (buf[offset]) +(int)(buf[offset+1]) * 255;
else
wart = (int)(buf[offset]<< 8)& 0x0000ff00 +(int)(buf[offset+1])&0xff*255 ;
break;
case "F":
wart = (buf[offset]<< 8)& 0x0000ff00 +(buf[offset+1])& 0x000000ff * 255 ;
break;
}
System.out.print(", "+wart);
pixels[x][y]=wart;
offset = offset+2;
}
}
for(int i = 0; i < width; i ++)
{
System.arraycopy(pixels[i], 0, oneDArray, i * height, height);
}
return oneDArray;
}
}
I know it's a mess; a lot of things are commented out :)
255 is wrong: it should be 256. When you reassemble a number from its digits, you multiply by powers of the base you are operating in, and 255 is not a power of 2.
Analogy: convert 111 from base 10 to base 10 "your way", using 99 and 9 in place of 100 and 10:
1*99 + 1*9 + 1 = 109
109 != 111, which is wrong. Likewise, multiplying by 255 will alter any number you try to reassemble from bytes.
Mask first, like this:
wart = (buf[offset] & 0xFF) | ((buf[offset+1] & 0xFF) << 8);
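If the pixels really are little-endian unsigned 16-bit values (which the numbers in the question suggest), the corrected line generalizes to a small helper. A minimal sketch, with illustrative names:
static int pixelAt(byte[] buf, int offset) {
    int low  = buf[offset]     & 0xFF; // mask first, undoing sign extension
    int high = buf[offset + 1] & 0xFF;
    return low | (high << 8);          // unsigned 16-bit result, 0..65535
}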
Here's what I'm working with right now:
for (int i = 0, numSamples = soundBytes.length / 2; i < numSamples; i += 2)
{
// Get the samples.
int sample1 = ((soundBytes[i] & 0xFF) << 8) | (soundBytes[i + 1] & 0xFF); // Automatically converts to unsigned int 0...65535
int sample2 = ((outputBytes[i] & 0xFF) << 8) | (outputBytes[i + 1] & 0xFF); // Automatically converts to unsigned int 0...65535
// Normalize for simplicity.
float normalizedSample1 = sample1 / 65535.0f;
float normalizedSample2 = sample2 / 65535.0f;
float normalizedMixedSample = 0.0f;
// Apply the algorithm.
if (normalizedSample1 < 0.5f && normalizedSample2 < 0.5f)
normalizedMixedSample = 2.0f * normalizedSample1 * normalizedSample2;
else
normalizedMixedSample = 2.0f * (normalizedSample1 + normalizedSample2) - (2.0f * normalizedSample1 * normalizedSample2) - 1.0f;
int mixedSample = (int)(normalizedMixedSample * 65535);
// Replace the sample in soundBytes array with this mixed sample.
soundBytes[i] = (byte)((mixedSample >> 8) & 0xFF);
soundBytes[i + 1] = (byte)(mixedSample & 0xFF);
}
As far as I can tell, it's an accurate representation of the algorithm defined on this page: http://www.vttoth.com/CMS/index.php/technical-notes/68
However, just mixing a sound with silence (all 0's) results in a sound that very obviously doesn't sound right; maybe it's best to describe it as higher-pitched and louder.
I'd appreciate help in determining whether I'm implementing the algorithm correctly, or whether I simply need to go about it a different way (different algorithm/method)?
In the linked article the author assumes A and B represent entire streams of audio. More specifically, X means the maximum absolute value of all of the samples in stream X, where X is either A or B. So what his algorithm does is scan the entirety of both streams to compute the max abs sample of each, and then scale things so that the output theoretically peaks at 1.0. You'll need to make multiple passes over the data to implement this algorithm, and if your data is streaming in, it simply will not work.
Here is an example of how I think the algorithm is supposed to work. It assumes that the samples have already been converted to floating point, to sidestep the issue of your conversion code being wrong (I'll explain what is wrong with it later):
double[] samplesA = ConvertToDoubles(samples1);
double[] samplesB = ConvertToDoubles(samples2);
double A = ComputeMaxPeak(samplesA);
double B = ComputeMaxPeak(samplesB);
// Z always equals 1 which is an un-useful bit of information.
double Z = A+B-A*B;
// really need to find a value x such that xA+xB=1, which I think is:
double x = 1 / (Math.sqrt(A) * Math.sqrt(B));
// Now mix and scale the samples
double[] samples = MixAndScale(samplesA, samplesB, x);
Mixing and scaling:
double[] MixAndScale(double[] samplesA, double[] samplesB, double scalingFactor)
{
    double[] result = new double[samplesA.length];
    for (int i = 0; i < samplesA.length; i++)
        result[i] = scalingFactor * (samplesA[i] + samplesB[i]);
    return result;
}
Computing the max peak:
double ComputeMaxPeak(double[] samples)
{
double max = 0;
for (int i = 0; i < samples.length; i++)
{
double x = Math.abs(samples[i]);
if (x > max)
max = x;
}
return max;
}
And conversion. Notice the cast to short, so that the sign bit is properly maintained, and the mask on the low byte so it isn't sign-extended:
double[] ConvertToDoubles(byte[] bytes)
{
    double[] samples = new double[bytes.length/2];
    for (int i = 0; i < samples.length; i++)
    {
        short tmp = (short) ((bytes[i*2] << 8) | (bytes[i*2+1] & 0xFF));
        samples[i] = tmp / 32767.0;
    }
    return samples;
}
This question is usually asked as a part of another question but it turns out that the answer is long. I've decided to answer it here so I can link to it elsewhere.
Although I'm not aware of a way that Java can produce audio samples for us at this time, if that changes in the future, this can be a place for it. I know that JavaFX has some stuff like this, for example AudioSpectrumListener, but still no way to access samples directly.
I'm using javax.sound.sampled for playback and/or recording but I'd like to do something with the audio.
Perhaps I'd like to display it visually or process it in some way.
How do I access audio sample data to do that with Java Sound?
See also:
Java Sound Tutorials (Official)
Java Sound Resources (Unofficial)
Well, the simplest answer is that at the moment Java can't produce sample data for the programmer.
This quote is from the official tutorial:
There are two ways to apply signal processing:
You can use any processing supported by the mixer or its component lines, by querying for Control objects and then setting the controls as the user desires. Typical controls supported by mixers and lines include gain, pan, and reverberation controls.
If the kind of processing you need isn't provided by the mixer or its lines, your program can operate directly on the audio bytes, manipulating them as desired.
This page discusses the first technique in greater detail, because there is no special API for the second technique.
Playback with javax.sound.sampled largely acts as a bridge between the file and the audio device. The bytes are read in from the file and sent off.
Don't assume the bytes are meaningful audio samples! Unless you happen to have an 8-bit AIFF file, they aren't. (On the other hand, if the samples are definitely 8-bit signed, you can do arithmetic with them. Using 8-bit is one way to avoid the complexity described here, if you're just playing around.)
So instead, I'll enumerate the types of AudioFormat.Encoding and describe how to decode them yourself. This answer will not cover how to encode them, but it's included in the complete code example at the bottom. Encoding is mostly just the decoding process in reverse.
This is a long answer but I wanted to give a thorough overview.
A Little About Digital Audio
Generally when digital audio is explained, we're referring to Linear Pulse-Code Modulation (LPCM).
A continuous sound wave is sampled at regular intervals and the amplitudes are quantized to integers of some scale.
Picture, for example, a sine wave sampled and quantized to 4 bits.
(Notice that the most positive value in two's complement representation is 1 less than the most negative value. This is a minor detail to be aware of. For example if you're clipping audio and forget this, the positive clips will overflow.)
When we have audio on the computer, we have an array of these samples. A sample array is what we want to turn the byte array into.
To decode PCM samples, we don't care much about the sample rate or number of channels, so I won't be saying much about them here. Channels are usually interleaved, so that if we had an array of them, they'd be stored like this:
Index 0: Sample 0 (Left Channel)
Index 1: Sample 0 (Right Channel)
Index 2: Sample 1 (Left Channel)
Index 3: Sample 1 (Right Channel)
Index 4: Sample 2 (Left Channel)
Index 5: Sample 2 (Right Channel)
...
In other words, for stereo, the samples in the array just alternate between left and right.
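As a sketch of what that means in code, de-interleaving a stereo sample array into separate channels might look like this (the names are illustrative):
static float[][] splitStereo(float[] interleaved) {
    float[] left  = new float[interleaved.length / 2];
    float[] right = new float[interleaved.length / 2];
    for (int i = 0; i < left.length; i++) {
        left[i]  = interleaved[2 * i];     // even indices hold the left channel
        right[i] = interleaved[2 * i + 1]; // odd indices hold the right channel
    }
    return new float[][] { left, right };
}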
Some Assumptions
All of the code examples will assume the following declarations:
byte[] bytes; The byte array, read from the AudioInputStream.
float[] samples; The output sample array that we're going to fill.
float sample; The sample we're currently working on.
long temp; An interim value used for general manipulation.
int i; The position in the byte array where the current sample's data starts.
We'll normalize all of the samples in our float[] array to the range of -1f <= sample <= 1f. All of the floating-point audio I've seen comes this way and it's pretty convenient.
If our source audio doesn't already come like that (as is the case with e.g. integer samples), we can normalize it ourselves using the following:
sample = sample / fullScale(bitsPerSample);
Where fullScale is 2^(bitsPerSample - 1), i.e. Math.pow(2, bitsPerSample - 1).
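For instance, with 16-bit samples (values here are just worked arithmetic):
double fullScale16 = Math.pow(2, 16 - 1);    // 32768.0
float half = (float) (-16384 / fullScale16); // -0.5f
float top  = (float) ( 32767 / fullScale16); // ~0.99997f; two's complement never quite reaches +1.0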
How do I coerce the byte array into meaningful data?
The byte array contains the sample frames split up and all in a line. This is actually very straightforward, except for something called endianness, which is the ordering of the bytes in each sample packet.
Here's a diagram. This sample (packed into a byte array) holds the decimal value 9999:
24-bit sample as big-endian:
bytes[i] bytes[i + 1] bytes[i + 2]
┌──────┐ ┌──────┐ ┌──────┐
00000000 00100111 00001111
24-bit sample as little-endian:
bytes[i] bytes[i + 1] bytes[i + 2]
┌──────┐ ┌──────┐ ┌──────┐
00001111 00100111 00000000
They hold the same binary values; however, the byte orders are reversed.
In big-endian, the more significant bytes come before the less significant bytes.
In little-endian, the less significant bytes come before the more significant bytes.
WAV files are stored in little-endian order and AIFF files are stored in big-endian order. Endianness can be obtained from AudioFormat.isBigEndian.
To concatenate the bytes and put them into our long temp variable, we:
Bitwise AND each byte with the mask 0xFF (which is 0b1111_1111) to avoid sign-extension when the byte is automatically promoted. (char, byte and short are promoted to int when arithmetic is performed on them.) See also What does value & 0xff do in Java?
Bit shift each byte in to position.
Bitwise OR the bytes together.
Here's a 24-bit example:
long temp;
if (isBigEndian) {
temp = (
((bytes[i ] & 0xffL) << 16)
| ((bytes[i + 1] & 0xffL) << 8)
| (bytes[i + 2] & 0xffL)
);
} else {
temp = (
(bytes[i ] & 0xffL)
| ((bytes[i + 1] & 0xffL) << 8)
| ((bytes[i + 2] & 0xffL) << 16)
);
}
Notice that the shift order is reversed based on endianness.
This can also be generalized to a loop, which can be seen in the full code at the bottom of this answer. (See the unpackAnyBit and packAnyBit methods.)
Now that we have the bytes concatenated together, we can take a few more steps to turn them into a sample. The next steps depend on the actual encoding.
How do I decode Encoding.PCM_SIGNED?
The two's complement sign must be extended. This means that if the most significant bit (MSB) is set to 1, we fill all the bits above it with 1s. The arithmetic right-shift (>>) will do the filling for us automatically if the sign bit is set, so I usually do it this way:
int bitsToExtend = Long.SIZE - bitsPerSample;
temp = (temp << bitsToExtend) >> bitsToExtend;
(Where Long.SIZE is 64. If our temp variable wasn't a long, we'd use something else. If we used e.g. int temp instead, we'd use 32.)
To understand how this works, here's a diagram of sign-extending 8-bit to 16-bit:
11111111 is the byte value -1, but the upper bits of the short are 0.
Shift the byte's MSB into the MSB position of the short.
0000 0000 1111 1111
<< 8
───────────────────
1111 1111 0000 0000
Shift it back and the right-shift fills all the upper bits with 1s.
We now have the short value of -1.
1111 1111 0000 0000
>> 8
───────────────────
1111 1111 1111 1111
Positive values (that had a 0 in the MSB) are left unchanged. This is a nice property of the arithmetic right-shift.
Then normalize the sample, as described in Some Assumptions.
You might not need to write explicit sign-extension if your code is simple
Java does sign-extension automatically when converting from one integral type to a larger type, for example byte to int. If you know that your input and output format are always signed, you can use the automatic sign-extension while concatenating bytes in the earlier step.
Recall from the section above (How do I coerce the byte array into meaningful data?) that we used b & 0xFF to prevent sign-extension from occurring. If you just remove the & 0xFF from the highest byte, sign-extension will happen automatically.
For example, the following decodes signed, big-endian, 16-bit samples:
for (int i = 0; i < bytes.length; i += 2) {
    int sample = (bytes[i] << 8)        // high byte is sign-extended
               | (bytes[i + 1] & 0xFF); // low byte is not
    // ...
}
How do I decode Encoding.PCM_UNSIGNED?
We turn it into a signed number. Unsigned samples are simply offset so that, for example:
An unsigned value of 0 corresponds to the most negative signed value.
An unsigned value of 2^(bitsPerSample - 1) corresponds to the signed value of 0.
An unsigned value of 2^bitsPerSample - 1 corresponds to the most positive signed value.
So this turns out to be pretty simple. Just subtract the offset:
float sample = (float) (temp - fullScale(bitsPerSample));
Then normalize the sample, as described in Some Assumptions.
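For instance, for 8-bit unsigned audio, where fullScale(8) is 128 (worked arithmetic, not from the original answer):
long temp = 255;                               // the largest 8-bit unsigned value
float sample = (float) ((temp - 128) / 128.0); // ~0.992f, just under full scale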
How do I decode Encoding.PCM_FLOAT?
This is new since Java 7.
In practice, floating-point PCM is typically either IEEE 32-bit or IEEE 64-bit and already normalized to the range of ±1.0. The samples can be obtained with the utility methods Float#intBitsToFloat and Double#longBitsToDouble.
// IEEE 32-bit
float sample = Float.intBitsToFloat((int) temp);
// IEEE 64-bit
double sampleAsDouble = Double.longBitsToDouble(temp);
float sample = (float) sampleAsDouble; // or just use double for arithmetic
How do I decode Encoding.ULAW and Encoding.ALAW?
These are companding compression codecs that are more common in telephones and such. They're supported by javax.sound.sampled, I assume because they're used by Sun's Au format. (However, they're not limited to just that type of container. For example, WAV can contain these encodings too.)
You can conceptualize A-law and μ-law like they're a floating-point format. These are PCM formats but the range of values is non-linear.
There are two ways to decode them. I'll show the way which uses the mathematical formula. You can also decode them by manipulating the binary directly which is described in this blog post but it's more esoteric-looking.
For both, the compressed data is 8-bit. Standardly A-law is 13-bit when decoded and μ-law is 14-bit when decoded; however, applying the formula yields a range of ±1.0.
Before you can apply the formula, there are three things to do:
Some of the bits are standardly inverted for storage due to reasons involving data integrity.
They're stored as sign and magnitude (rather than two's complement).
The formula also expects a range of ±1.0, so the 8-bit value has to be scaled.
For μ-law all the bits are inverted, so:
temp ^= 0xffL; // 0xff == 0b1111_1111
(Note that we can't use ~, because we don't want to invert the high bits of the long.)
For A-law, every other bit is inverted, so:
temp ^= 0x55L; // 0x55 == 0b0101_0101
(XOR can be used to do inversion. See How do you set, clear and toggle a bit?)
To convert from sign and magnitude to two's complement, we:
Check to see if the sign bit was set.
If so, clear the sign bit and negate the number.
// 0x80 == 0b1000_0000
if ((temp & 0x80L) != 0) {
temp ^= 0x80L;
temp = -temp;
}
Then scale the encoded numbers, the same way as described in Some Assumptions:
sample = temp / fullScale(8);
Now we can apply the expansion.
The μ-law formula translated to Java is then:
sample = (float) (
signum(sample)
*
(1.0 / 255.0)
*
(pow(256.0, abs(sample)) - 1.0)
);
The A-law formula translated to Java is then:
float signum = signum(sample);
sample = abs(sample);
if (sample < (1.0 / (1.0 + log(87.7)))) {
sample = (float) (
sample * ((1.0 + log(87.7)) / 87.7)
);
} else {
sample = (float) (
exp((sample * (1.0 + log(87.7))) - 1.0) / 87.7
);
}
sample = signum * sample;
Here's the full example code for the SimpleAudioConversion class.
package mcve.audio;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioFormat.Encoding;
import static java.lang.Math.*;
/**
 * <p>Performs simple audio format conversion.</p>
 *
 * <p>Example usage:</p>
 *
 * <pre>{@code AudioInputStream ais = ... ;
 * SourceDataLine line = ... ;
 * AudioFormat fmt = ... ;
 *
 * // do setup
 *
 * for (int blen = 0; (blen = ais.read(bytes)) > -1;) {
 *     int slen;
 *     slen = SimpleAudioConversion.decode(bytes, samples, blen, fmt);
 *
 *     // do something with samples
 *
 *     blen = SimpleAudioConversion.encode(samples, bytes, slen, fmt);
 *     line.write(bytes, 0, blen);
 * }}</pre>
 *
 * @author Radiodef
 * @see Overview on Stack Overflow
 */
public final class SimpleAudioConversion {
private SimpleAudioConversion() {}
/**
 * Converts from a byte array to an audio sample float array.
 *
 * @param bytes   the byte array, filled by the AudioInputStream
 * @param samples an array to fill up with audio samples
 * @param blen    the return value of AudioInputStream.read
 * @param fmt     the source AudioFormat
 *
 * @return the number of valid audio samples converted
 *
 * @throws NullPointerException if bytes, samples or fmt is null
 * @throws ArrayIndexOutOfBoundsException
 *         if bytes.length is less than blen or
 *         if samples.length is less than blen / bytesPerSample(fmt.getSampleSizeInBits())
 */
public static int decode(byte[] bytes,
float[] samples,
int blen,
AudioFormat fmt) {
int bitsPerSample = fmt.getSampleSizeInBits();
int bytesPerSample = bytesPerSample(bitsPerSample);
boolean isBigEndian = fmt.isBigEndian();
Encoding encoding = fmt.getEncoding();
double fullScale = fullScale(bitsPerSample);
int i = 0;
int s = 0;
while (i < blen) {
long temp = unpackBits(bytes, i, isBigEndian, bytesPerSample);
float sample = 0f;
if (encoding == Encoding.PCM_SIGNED) {
temp = extendSign(temp, bitsPerSample);
sample = (float) (temp / fullScale);
} else if (encoding == Encoding.PCM_UNSIGNED) {
temp = unsignedToSigned(temp, bitsPerSample);
sample = (float) (temp / fullScale);
} else if (encoding == Encoding.PCM_FLOAT) {
if (bitsPerSample == 32) {
sample = Float.intBitsToFloat((int) temp);
} else if (bitsPerSample == 64) {
sample = (float) Double.longBitsToDouble(temp);
}
} else if (encoding == Encoding.ULAW) {
sample = bitsToMuLaw(temp);
} else if (encoding == Encoding.ALAW) {
sample = bitsToALaw(temp);
}
samples[s] = sample;
i += bytesPerSample;
s++;
}
return s;
}
/**
 * Converts from an audio sample float array to a byte array.
 *
 * @param samples an array of audio samples to encode
 * @param bytes   an array to fill up with bytes
 * @param slen    the return value of the decode method
 * @param fmt     the destination AudioFormat
 *
 * @return the number of valid bytes converted
 *
 * @throws NullPointerException if samples, bytes or fmt is null
 * @throws ArrayIndexOutOfBoundsException
 *         if samples.length is less than slen or
 *         if bytes.length is less than slen * bytesPerSample(fmt.getSampleSizeInBits())
 */
public static int encode(float[] samples,
byte[] bytes,
int slen,
AudioFormat fmt) {
int bitsPerSample = fmt.getSampleSizeInBits();
int bytesPerSample = bytesPerSample(bitsPerSample);
boolean isBigEndian = fmt.isBigEndian();
Encoding encoding = fmt.getEncoding();
double fullScale = fullScale(bitsPerSample);
int i = 0;
int s = 0;
while (s < slen) {
float sample = samples[s];
long temp = 0L;
if (encoding == Encoding.PCM_SIGNED) {
temp = (long) (sample * fullScale);
} else if (encoding == Encoding.PCM_UNSIGNED) {
temp = (long) (sample * fullScale);
temp = signedToUnsigned(temp, bitsPerSample);
} else if (encoding == Encoding.PCM_FLOAT) {
if (bitsPerSample == 32) {
temp = Float.floatToRawIntBits(sample);
} else if (bitsPerSample == 64) {
temp = Double.doubleToRawLongBits(sample);
}
} else if (encoding == Encoding.ULAW) {
temp = muLawToBits(sample);
} else if (encoding == Encoding.ALAW) {
temp = aLawToBits(sample);
}
packBits(bytes, i, temp, isBigEndian, bytesPerSample);
i += bytesPerSample;
s++;
}
return i;
}
/**
 * Computes the block-aligned bytes per sample of the audio format,
 * using Math.ceil(bitsPerSample / 8.0).
 * <p>
 * Round towards the ceiling because formats that allow bit depths
 * in non-integral multiples of 8 typically pad up to the nearest
 * integral multiple of 8. So for example, a 31-bit AIFF file will
 * actually store 32-bit blocks.
 *
 * @param bitsPerSample the return value of AudioFormat.getSampleSizeInBits
 * @return the block-aligned bytes per sample of the audio format
 */
public static int bytesPerSample(int bitsPerSample) {
return (int) ceil(bitsPerSample / 8.0); // optimization: ((bitsPerSample + 7) >>> 3)
}
/**
 * Computes the largest magnitude representable by the audio format,
 * using Math.pow(2.0, bitsPerSample - 1). Note that for two's complement
 * audio, the largest positive value is one less than the return value of
 * this method.
 * <p>
 * The result is returned as a double because in the case that
 * bitsPerSample is 64, a long would overflow.
 *
 * @param bitsPerSample the return value of AudioFormat.getSampleSizeInBits
 * @return the largest magnitude representable by the audio format
 */
public static double fullScale(int bitsPerSample) {
return pow(2.0, bitsPerSample - 1); // optimization: (1L << (bitsPerSample - 1))
}
private static long unpackBits(byte[] bytes,
int i,
boolean isBigEndian,
int bytesPerSample) {
switch (bytesPerSample) {
case 1: return unpack8Bit(bytes, i);
case 2: return unpack16Bit(bytes, i, isBigEndian);
case 3: return unpack24Bit(bytes, i, isBigEndian);
default: return unpackAnyBit(bytes, i, isBigEndian, bytesPerSample);
}
}
private static long unpack8Bit(byte[] bytes, int i) {
return bytes[i] & 0xffL;
}
private static long unpack16Bit(byte[] bytes,
int i,
boolean isBigEndian) {
if (isBigEndian) {
return (
((bytes[i ] & 0xffL) << 8)
| (bytes[i + 1] & 0xffL)
);
} else {
return (
(bytes[i ] & 0xffL)
| ((bytes[i + 1] & 0xffL) << 8)
);
}
}
private static long unpack24Bit(byte[] bytes,
int i,
boolean isBigEndian) {
if (isBigEndian) {
return (
((bytes[i ] & 0xffL) << 16)
| ((bytes[i + 1] & 0xffL) << 8)
| (bytes[i + 2] & 0xffL)
);
} else {
return (
(bytes[i ] & 0xffL)
| ((bytes[i + 1] & 0xffL) << 8)
| ((bytes[i + 2] & 0xffL) << 16)
);
}
}
private static long unpackAnyBit(byte[] bytes,
int i,
boolean isBigEndian,
int bytesPerSample) {
long temp = 0;
if (isBigEndian) {
for (int b = 0; b < bytesPerSample; b++) {
temp |= (bytes[i + b] & 0xffL) << (
8 * (bytesPerSample - b - 1)
);
}
} else {
for (int b = 0; b < bytesPerSample; b++) {
temp |= (bytes[i + b] & 0xffL) << (8 * b);
}
}
return temp;
}
private static void packBits(byte[] bytes,
int i,
long temp,
boolean isBigEndian,
int bytesPerSample) {
switch (bytesPerSample) {
case 1: pack8Bit(bytes, i, temp);
break;
case 2: pack16Bit(bytes, i, temp, isBigEndian);
break;
case 3: pack24Bit(bytes, i, temp, isBigEndian);
break;
default: packAnyBit(bytes, i, temp, isBigEndian, bytesPerSample);
break;
}
}
private static void pack8Bit(byte[] bytes, int i, long temp) {
bytes[i] = (byte) (temp & 0xffL);
}
private static void pack16Bit(byte[] bytes,
int i,
long temp,
boolean isBigEndian) {
if (isBigEndian) {
bytes[i ] = (byte) ((temp >>> 8) & 0xffL);
bytes[i + 1] = (byte) ( temp & 0xffL);
} else {
bytes[i ] = (byte) ( temp & 0xffL);
bytes[i + 1] = (byte) ((temp >>> 8) & 0xffL);
}
}
private static void pack24Bit(byte[] bytes,
int i,
long temp,
boolean isBigEndian) {
if (isBigEndian) {
bytes[i ] = (byte) ((temp >>> 16) & 0xffL);
bytes[i + 1] = (byte) ((temp >>> 8) & 0xffL);
bytes[i + 2] = (byte) ( temp & 0xffL);
} else {
bytes[i ] = (byte) ( temp & 0xffL);
bytes[i + 1] = (byte) ((temp >>> 8) & 0xffL);
bytes[i + 2] = (byte) ((temp >>> 16) & 0xffL);
}
}
private static void packAnyBit(byte[] bytes,
int i,
long temp,
boolean isBigEndian,
int bytesPerSample) {
if (isBigEndian) {
for (int b = 0; b < bytesPerSample; b++) {
bytes[i + b] = (byte) (
(temp >>> (8 * (bytesPerSample - b - 1))) & 0xffL
);
}
} else {
for (int b = 0; b < bytesPerSample; b++) {
bytes[i + b] = (byte) ((temp >>> (8 * b)) & 0xffL);
}
}
}
private static long extendSign(long temp, int bitsPerSample) {
int bitsToExtend = Long.SIZE - bitsPerSample;
return (temp << bitsToExtend) >> bitsToExtend;
}
private static long unsignedToSigned(long temp, int bitsPerSample) {
return temp - (long) fullScale(bitsPerSample);
}
private static long signedToUnsigned(long temp, int bitsPerSample) {
return temp + (long) fullScale(bitsPerSample);
}
// mu-law constant
private static final double MU = 255.0;
// A-law constant
private static final double A = 87.7;
// natural logarithm of A
private static final double LN_A = log(A);
private static float bitsToMuLaw(long temp) {
temp ^= 0xffL;
if ((temp & 0x80L) != 0) {
temp = -(temp ^ 0x80L);
}
float sample = (float) (temp / fullScale(8));
return (float) (
signum(sample)
*
(1.0 / MU)
*
(pow(1.0 + MU, abs(sample)) - 1.0)
);
}
private static long muLawToBits(float sample) {
double sign = signum(sample);
sample = abs(sample);
sample = (float) (
sign * (log(1.0 + (MU * sample)) / log(1.0 + MU))
);
long temp = (long) (sample * fullScale(8));
if (temp < 0) {
temp = -temp ^ 0x80L;
}
return temp ^ 0xffL;
}
private static float bitsToALaw(long temp) {
temp ^= 0x55L;
if ((temp & 0x80L) != 0) {
temp = -(temp ^ 0x80L);
}
float sample = (float) (temp / fullScale(8));
float sign = signum(sample);
sample = abs(sample);
if (sample < (1.0 / (1.0 + LN_A))) {
sample = (float) (sample * ((1.0 + LN_A) / A));
} else {
sample = (float) (exp((sample * (1.0 + LN_A)) - 1.0) / A);
}
return sign * sample;
}
private static long aLawToBits(float sample) {
double sign = signum(sample);
sample = abs(sample);
if (sample < (1.0 / A)) {
sample = (float) ((A * sample) / (1.0 + LN_A));
} else {
sample = (float) ((1.0 + log(A * sample)) / (1.0 + LN_A));
}
sample *= sign;
long temp = (long) (sample * fullScale(8));
if (temp < 0) {
temp = -temp ^ 0x80L;
}
return temp ^ 0x55L;
}
}
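To tie it together, here's a hypothetical usage sketch (not part of the class above): it decodes a PCM file and prints the peak sample. The file name "example.wav" is a placeholder; any PCM WAV or AIFF should work.
import javax.sound.sampled.*;
import java.io.File;

public class PeakExample {
    public static void main(String[] args) throws Exception {
        AudioInputStream ais = AudioSystem.getAudioInputStream(new File("example.wav"));
        AudioFormat fmt = ais.getFormat();

        int bytesPerSample = SimpleAudioConversion.bytesPerSample(fmt.getSampleSizeInBits());
        byte[] bytes = new byte[8192];
        float[] samples = new float[bytes.length / bytesPerSample];

        float peak = 0f;
        for (int blen; (blen = ais.read(bytes)) > -1;) {
            int slen = SimpleAudioConversion.decode(bytes, samples, blen, fmt);
            for (int s = 0; s < slen; s++) {
                peak = Math.max(peak, Math.abs(samples[s]));
            }
        }
        ais.close();
        System.out.println("peak = " + peak);
    }
}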
This is how you get the actual sample data from the currently playing sound. The other excellent answer will tell you what the data means. I haven't tried it on any OS other than my Windows 10 machine, so YMMV. For me it pulls from the current system default recording device. On Windows, set that to "Stereo Mix" instead of "Microphone" to capture the playing sound. You may have to toggle "Show Disabled Devices" to see "Stereo Mix".
import javax.sound.sampled.*;
public class SampleAudio {
private static long extendSign(long temp, int bitsPerSample) {
int extensionBits = 64 - bitsPerSample;
return (temp << extensionBits) >> extensionBits;
}
public static void main(String[] args) throws LineUnavailableException {
float sampleRate = 8000;
int sampleSizeBits = 16;
int numChannels = 1; // Mono
AudioFormat format = new AudioFormat(sampleRate, sampleSizeBits, numChannels, true, true);
TargetDataLine tdl = AudioSystem.getTargetDataLine(format);
tdl.open(format);
tdl.start();
if (!tdl.isOpen()) {
System.exit(1);
}
byte[] data = new byte[(int)sampleRate*10]; // 80,000 bytes: 5 seconds of 16-bit mono at 8 kHz
int read = tdl.read(data, 0, (int)sampleRate*10);
if (read > 0) {
for (int i = 0; i < read-1; i = i + 2) {
long val = ((data[i] & 0xffL) << 8L) | (data[i + 1] & 0xffL);
long valf = extendSign(val, 16);
System.out.println(i + "\t" + valf);
}
}
tdl.close();
}
}
I've got a WAV file (32-bit sample size, 8 bytes per frame, 44100 Hz, PCM_FLOAT) that I need to create a sample array from. This is the code I have used for a WAV with 16-bit sample size, 4 bytes per frame, 44100 Hz, PCM_SIGNED:
private float[] getSampleArray(byte[] eightBitByteArray) {
int newArrayLength = eightBitByteArray.length
/ (2 * calculateNumberOfChannels()) + 1;
float[] toReturn = new float[newArrayLength];
int index = 0;
for (int t = 0; t + 4 < eightBitByteArray.length; t += 2) // t+2 -> skip
//2nd channel
{
int low=((int) eightBitByteArray[t++]) & 0x00ff;
int high=((int) eightBitByteArray[t++]) << 8;
double value = Math.pow(low+high, 2);
double dB = 0;
if (value != 0) {
dB = 20.0 * Math.log10(value); // calculate decibel
}
toReturn[index] = getFloatValue(dB); //minorly important conversion
//to normalized values
index++;
}
return toReturn;
}
Obviously this code can't work for the 32-bit sample size WAV, as I have to account for 2 more bytes in the first channel.
Does anybody know how the 2 extra bytes have to be added (and shifted) to calculate the amplitude? Unfortunately Google didn't help me at all :/.
Thanks in advance.
Something like this should do the trick.
for (int t = 0; t + 4 < eightBitByteArray.length; t += 4) // t+4 -> skip
//2nd channel
{
float value = ByteBuffer.wrap(eightBitByteArray, t, 4).order(ByteOrder.LITTLE_ENDIAN).getFloat();
double dB = 0;
if (value != 0) {
dB = 20.0 * Math.log10(value); // calculate decibel
}
toReturn[index] = getFloatValue(dB); //minorly important conversion
//to normalized values
index++;
}
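For reference, here's the same idea as a self-contained method (names are illustrative; assumes 2 channels of 32-bit little-endian floats, i.e. 8 bytes per frame as in the question):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Pulls the first channel's samples out of raw PCM_FLOAT frames.
static float[] firstChannelSamples(byte[] frames) {
    final int bytesPerFrame = 8; // 2 channels * 4 bytes per sample
    float[] samples = new float[frames.length / bytesPerFrame];
    for (int i = 0; i < samples.length; i++) {
        samples[i] = ByteBuffer
                .wrap(frames, i * bytesPerFrame, 4) // first 4 bytes = channel 1
                .order(ByteOrder.LITTLE_ENDIAN)
                .getFloat();
    }
    return samples;
}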
On another note - converting instantaneous samples to dB is nonsensical.
Alright, so I am working on creating an Android audio visualization app. The problem is that what I get from the method getFft() doesn't jibe with what Google says it should produce. I traced the source code all the way back to C++, but I am not familiar enough with C++ or FFT to actually understand what is happening.
I will try and include everything needed here:
(Java) Visualizer.getFft(byte[] fft)
/**
 * Returns a frequency capture of currently playing audio content. The capture is a 8-bit
 * magnitude FFT. Note that the size of the FFT is half of the specified capture size but both
 * sides of the spectrum are returned yielding in a number of bytes equal to the capture size.
 * {@see #getCaptureSize()}.
 * <p>This method must be called when the Visualizer is enabled.
 * @param fft array of bytes where the FFT should be returned
 * @return {@link #SUCCESS} in case of success,
 * {@link #ERROR_NO_MEMORY}, {@link #ERROR_INVALID_OPERATION} or {@link #ERROR_DEAD_OBJECT}
 * in case of failure.
 * @throws IllegalStateException
 */
public int getFft(byte[] fft)
throws IllegalStateException {
synchronized (mStateLock) {
if (mState != STATE_ENABLED) {
throw(new IllegalStateException("getFft() called in wrong state: "+mState));
}
return native_getFft(fft);
}
}
(C++) Visualizer.getFft(uint8_t *fft)
status_t Visualizer::getFft(uint8_t *fft)
{
if (fft == NULL) {
return BAD_VALUE;
}
if (mCaptureSize == 0) {
return NO_INIT;
}
status_t status = NO_ERROR;
if (mEnabled) {
uint8_t buf[mCaptureSize];
status = getWaveForm(buf);
if (status == NO_ERROR) {
status = doFft(fft, buf);
}
} else {
memset(fft, 0, mCaptureSize);
}
return status;
}
(C++) Visualizer.doFft(uint8_t *fft, uint8_t *waveform)
status_t Visualizer::doFft(uint8_t *fft, uint8_t *waveform)
{
int32_t workspace[mCaptureSize >> 1];
int32_t nonzero = 0;
for (uint32_t i = 0; i < mCaptureSize; i += 2) {
workspace[i >> 1] = (waveform[i] ^ 0x80) << 23;
workspace[i >> 1] |= (waveform[i + 1] ^ 0x80) << 7;
nonzero |= workspace[i >> 1];
}
if (nonzero) {
fixed_fft_real(mCaptureSize >> 1, workspace);
}
for (uint32_t i = 0; i < mCaptureSize; i += 2) {
fft[i] = workspace[i >> 1] >> 23;
fft[i + 1] = workspace[i >> 1] >> 7;
}
return NO_ERROR;
}
(C++) fixedfft.fixed_fft_real(int n, int32_t *v)
void fixed_fft_real(int n, int32_t *v)
{
int scale = LOG_FFT_SIZE, m = n >> 1, i;
fixed_fft(n, v);
for (i = 1; i <= n; i <<= 1, --scale);
v[0] = mult(~v[0], 0x80008000);
v[m] = half(v[m]);
for (i = 1; i < n >> 1; ++i) {
int32_t x = half(v[i]);
int32_t z = half(v[n - i]);
int32_t y = z - (x ^ 0xFFFF);
x = half(x + (z ^ 0xFFFF));
y = mult(y, twiddle[i << scale]);
v[i] = x - y;
v[n - i] = (x + y) ^ 0xFFFF;
}
}
(C++) fixedfft.fixed_fft(int n, int32_t *v)
void fixed_fft(int n, int32_t *v)
{
int scale = LOG_FFT_SIZE, i, p, r;
for (r = 0, i = 1; i < n; ++i) {
for (p = n; !(p & r); p >>= 1, r ^= p);
if (i < r) {
int32_t t = v[i];
v[i] = v[r];
v[r] = t;
}
}
for (p = 1; p < n; p <<= 1) {
--scale;
for (i = 0; i < n; i += p << 1) {
int32_t x = half(v[i]);
int32_t y = half(v[i + p]);
v[i] = x + y;
v[i + p] = x - y;
}
for (r = 1; r < p; ++r) {
int32_t w = MAX_FFT_SIZE / 4 - (r << scale);
i = w >> 31;
w = twiddle[(w ^ i) - i] ^ (i << 16);
for (i = r; i < n; i += p << 1) {
int32_t x = half(v[i]);
int32_t y = mult(w, v[i + p]);
v[i] = x - y;
v[i + p] = x + y;
}
}
}
}
If you made it through all that, you are awesome! So my issue is: when I call the Java method getFft() I end up with negative values, which shouldn't exist if the returned array is meant to represent magnitudes. So my question is, what do I need to do to make the array represent magnitude?
EDIT: It appears my data may actually be the Fourier coefficients. I was poking around the web and found this. The applet "Start Function FFT" displays a graphed representation of coefficients, and it is a spitting image of what happens when I graph the data from getFft(). So, new question: is this what my data is? And if so, how do I go from the coefficients to a spectral analysis of it?
An FFT doesn't just produce magnitude; it produces phase as well (the output for each sample is a complex number). If you want magnitude, then you need to explicitly calculate it for each output sample, as sqrt(re*re + im*im), where re and im are the real and imaginary components of each complex number, respectively.
Unfortunately, I can't see anywhere in your code where you're working with complex numbers, so perhaps some rewrite is required.
UPDATE
If I had to guess (after glancing at the code), I'd say that real components were at even indices, and odd components were at odd indices. So to get magnitudes, you'd need to do something like:
uint32_t mag[N/2];
for (int i = 0; i < N/2; i++)
{
    // this is the squared magnitude; take sqrt(mag[i]) for the true magnitude
    mag[i] = fft[2*i]*fft[2*i] + fft[2*i+1]*fft[2*i+1];
}
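Following the same guess about the layout, a Java version over the byte[] filled by getFft() might look like this. Note that Java's byte is signed, which is what we want here, since the coefficients are signed values; the capture size is illustrative:
int captureSize = 1024;             // hypothetical; use Visualizer.getCaptureSize()
byte[] fft = new byte[captureSize]; // to be filled by visualizer.getFft(fft)
float[] magnitudes = new float[fft.length / 2];
for (int i = 0; i < magnitudes.length; i++) {
    int re = fft[2 * i];     // sign extension is correct here: the values are signed
    int im = fft[2 * i + 1];
    magnitudes[i] = (float) Math.sqrt(re * re + im * im);
}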
One possible explanation why you see negative values: byte is a signed data type in Java. All values greater than or equal to 1000 0000 in binary (0x80) are interpreted as negative integers.
If we know that all values are expected to be in the range [0..255], then we have to map the values to a larger type and mask off the upper bits:
byte signedByte = (byte) 0xff;                    // = -1
short unsignedByte = (short) (signedByte & 0xff); // = 255
"The capture is a 8-bit magnitude FFT" probably means that the return values have an 8-bit magnitude, not that they are magnitudes themselves.
According to Jason:
"For real-valued signals, like the ones you have in audio processing, the negative frequency output will be a mirror image of the positive frequencies."
Android 2.3 Visualizer - Trouble understanding getFft()