AudioTrack only playing noise instead of recorded voice - java

I want to play a recorded voice using AudioTrack, but it only produces noise. I have tried different techniques but have been unable to solve this issue.
I changed:
the frequency rate, the AudioFormat channel configuration, and the AudioFormat encoding.
public class PlayAudio extends AsyncTask<Void, Integer, Void> {
PlayAudio playTask;
String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/MyFolder/";
String myfile = path + "filename" + ".wav";
File recordingFile = new File(myfile);
boolean isRecording = false, isPlaying = false;
int frequency = 44100, channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
@Override
protected Void doInBackground(Void... params) {
isPlaying = true;
int bufferSize = AudioTrack.getMinBufferSize(frequency,channelConfiguration,audioEncoding);
short[] audiodata = new short[bufferSize / 4];
try {
DataInputStream dis = new DataInputStream(new BufferedInputStream(new FileInputStream(recordingFile)));
AudioTrack audioTrack = new AudioTrack(
AudioManager.STREAM_MUSIC, frequency,
channelConfiguration, audioEncoding, bufferSize,
AudioTrack.MODE_STREAM);
audioTrack.play();
while (isPlaying && dis.available() > 0) {
int i = 0;
while (dis.available() > 0 && i < audiodata.length) {
audiodata[i] = dis.readShort();
i++;
}
audioTrack.write(audiodata, 0, audiodata.length);
}
dis.close();
// startPlaybackButton.setEnabled(false);
// stopPlaybackButton.setEnabled(true);
} catch (Throwable t) {
Log.e("AudioTrack", "Playback Failed");
}
return null;
}
}

I don't know if this is the whole problem, but part of your problem is that you're treating the WAV file as if all of it is audio data. In fact, there is a fair amount of metadata in there. See http://soundfile.sapp.org/doc/WaveFormat/ for more information.
The safest thing to do is to parse the file until you find the data chunk, then read the data chunk, and then stop (because often there's metadata that comes after the data chunk too).
Here's some rough code to give you the idea.
try {
// 'is' is an InputStream opened on the wav file.
byte[] buffer = new byte[1024]; // scratch space for reading the data chunk
// First find the data chunk
byte[] bytes = new byte[4];
// Read first 4 bytes.
// (Should be the RIFF descriptor.)
// Assume it's ok.
is.read(bytes);
// First subchunk will always be at byte 12.
// (There is no other dependable constant.)
is.skip(8);
for (;;) {
// Read each chunk descriptor.
if (is.read(bytes) < 0) {
break;
}
String desc = new String(bytes, "US-ASCII");
// Read the chunk length (stored little-endian).
if (is.read(bytes) < 0) {
break;
}
int dataLength = (
(bytes[0] & 0xFF) |
((bytes[1] & 0xFF) << 8) |
((bytes[2] & 0xFF) << 16) |
((bytes[3] & 0xFF) << 24));
long length = getUnsignedInt(dataLength);
if (desc.equals("data")) {
// This is the audio: read 'length' bytes (e.g. into buffer,
// feeding them to AudioTrack) and then stop.
break;
}
// Any other chunk: skip it and keep looking.
is.skip(length);
}
} catch (IOException e) {
e.printStackTrace();
}
public static long getUnsignedInt(int x) {
return x & 0x00000000ffffffffL;
}
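One more thing worth checking, though it is an assumption about how the file was recorded: samples in a WAV data chunk are little-endian, while DataInputStream.readShort() reads big-endian, and that mismatch alone is enough to turn playback into noise. A minimal sketch of a byte-order-aware sample read (the helper is hypothetical, not from the original post):

public static short readLittleEndianSample(DataInputStream dis) throws IOException {
    // WAV stores the low byte first; readShort() would combine these
    // two bytes in the opposite (big-endian) order.
    int lo = dis.read();
    int hi = dis.read();
    return (short) ((lo & 0xFF) | ((hi & 0xFF) << 8));
}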

Related

Sound class sounds layered and screechy on Windows

On Mac this error does not occur. However, on Windows, any sounds I play multiple times over each other start to sound screechy, layering over each other in an unpleasant way.
Here is relevant code from my Sound class:
public class NewerSound {
private boolean stop = true;
private boolean loopable;
private boolean isUrl;
private URL fileUrl;
private Thread sound;
private double volume = 1.0;
public NewerSound(URL url, boolean loopable) throws UnsupportedAudioFileException, IOException {
isUrl = true;
fileUrl = url;
this.loopable = loopable;
}
public void play() {
stop = false;
Runnable r = new Runnable() {
@Override
public void run() {
do {
try {
AudioInputStream in;
if(!isUrl)
in = getAudioInputStream(new File(fileName));
else
in = getAudioInputStream(fileUrl);
final AudioFormat outFormat = getOutFormat(in.getFormat());
final Info info = new Info(SourceDataLine.class, outFormat);
try(final SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info)) {
if(line != null) {
line.open(outFormat);
line.start();
AudioInputStream inputMystream = AudioSystem.getAudioInputStream(outFormat, in);
stream(inputMystream, line);
line.drain();
line.stop();
}
}
}
catch(UnsupportedAudioFileException | LineUnavailableException | IOException e) {
throw new IllegalStateException(e);
}
} while(loopable && !stop);
}
};
sound = new Thread(r);
sound.start();
}
private AudioFormat getOutFormat(AudioFormat inFormat) {
final int ch = inFormat.getChannels();
final float rate = inFormat.getSampleRate();
return new AudioFormat(PCM_SIGNED, rate, 16, ch, ch * 2, rate, false);
}
private void stream(AudioInputStream in, SourceDataLine line) throws IOException {
byte[] buffer = new byte[4];
for(int n = 0; n != -1 && !stop; n = in.read(buffer, 0, buffer.length)) {
byte[] bufferTemp = new byte[buffer.length];
for(int i = 0; i < bufferTemp.length; i += 2) {
short audioSample = (short) ((short) ((buffer[i + 1] & 0xff) << 8) | (buffer[i] & 0xff));
audioSample = (short) (audioSample * volume);
bufferTemp[i] = (byte) audioSample;
bufferTemp[i + 1] = (byte) (audioSample >> 8);
}
buffer = bufferTemp;
line.write(buffer, 0, n);
}
}
}
It is possible that the issue is concurrent access to the same resources when the same sound is played over itself via the NewerSound.play() method.
Please let me know if any other details are needed. Much appreciated :)
The method you are using to change the volume in the method "stream" is flawed. You have 16-bit encoding, so it takes two bytes to form a single audio value. You need to assemble the value from each byte pair before the multiplication, then split the 16-bit result back into two bytes. There are a number of StackOverflow threads with code to do this.
I don't know if this is the whole reason for the problem you describe, but it definitely could be, and it definitely needs to be fixed.
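For reference, a minimal sketch of that assemble-scale-split approach for 16-bit little-endian PCM (the helper name and signature are illustrative, not from the question):

// Scale 16-bit little-endian PCM samples in place by 'volume' (0.0 to 1.0).
// Each sample spans two bytes: assemble them, scale, then split back.
static void scalePcm16(byte[] buf, int validBytes, double volume) {
    for (int i = 0; i + 1 < validBytes; i += 2) {
        short sample = (short) (((buf[i + 1] & 0xFF) << 8) | (buf[i] & 0xFF));
        sample = (short) (sample * volume);
        buf[i] = (byte) sample;            // low byte
        buf[i + 1] = (byte) (sample >> 8); // high byte
    }
}

Using the count of valid bytes as the loop bound also keeps a partial read at the end of the stream from being scaled as garbage.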

Android, decode mp3, mix several audio streams and encode to PCM (output is too fast)

I have a question about decoding MP3s, mixing several audio streams, and encoding the result to M4A (AAC) on Android. For that I use JLayer for Android to decode the MP3s, AudioTrack to play the song, and MediaCodec with MediaFormat to encode the PCM. The problem is that my output after encoding is too fast; for example, I should get a 5-second audio mix, but instead I get roughly 1.5 seconds. I think I am losing audio frames somewhere, but I am not sure about that. Thanks for any answer.
(The output file is roughly 25% faster than it should be.)
Decode mp3 code:
public void decodeMP3toPCM(Resources res, int resource) throws BitstreamException, DecoderException, IOException {
InputStream inputStream = new BufferedInputStream(res.openRawResource(resource), 1152);
Bitstream bitstream = new Bitstream(inputStream);
Decoder decoder = new Decoder();
boolean done = false;
while (!done) {
Header frameHeader = bitstream.readFrame();
if (frameHeader == null) {
done = true;
} else {
SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
mTimeCount += frameHeader.ms_per_frame();
short[] pcm = output.getBuffer();
mDataBuffer.addFrame(mViewId, pcm);
mReadedFrames++;
mAudioTrack.write(pcm, 0, pcm.length);
}
bitstream.closeFrame();
}
inputStream.close();
}
encode:
public class AudioEncoder {
private MediaCodec mediaCodec;
private BufferedOutputStream outputStream;
private String mediaType = "audio/mp4a-latm";
public AudioEncoder(String filePath) throws IOException {
File f = new File(filePath);
touch(f);
try {
outputStream = new BufferedOutputStream(new FileOutputStream(f));
} catch (Exception e) {
e.printStackTrace();
}
try {
//mediaCodec = MediaCodec.createEncoderByType(mediaType);
mediaCodec = MediaCodec.createByCodecName("OMX.google.aac.encoder");
} catch (IOException e) {
e.printStackTrace();
}
mediaCodec = MediaCodec.createEncoderByType(mediaType);
final int kSampleRates[] = { 8000, 11025, 22050, 44100, 48000 };
final int kBitRates[] = { 64000, 128000 };
MediaFormat mediaFormat = MediaFormat.createAudioFormat(mediaType,kSampleRates[3],2);
mediaFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
mediaFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 4608);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, kBitRates[1]);
mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mediaCodec.start();
}
public void close() {
try {
mediaCodec.stop();
mediaCodec.release();
outputStream.flush();
outputStream.close();
} catch (Exception e) {
e.printStackTrace();
}
}
public synchronized void offerEncoder(byte[] input) {
Log.e("synchro ", input.length + " is coming");
try {
ByteBuffer[] inputBuffers = mediaCodec.getInputBuffers();
ByteBuffer[] outputBuffers = mediaCodec.getOutputBuffers();
int inputBufferIndex = mediaCodec.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0) {
ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(input);
mediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, 0, 0);
}
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0) {
int outBitsSize = bufferInfo.size;
int outPacketSize = outBitsSize + 7; // 7 is ADTS size
ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + outBitsSize);
byte[] outData = new byte[outPacketSize];
addADTStoPacket(outData, outPacketSize);
outputBuffer.get(outData, 7, outBitsSize);
outputBuffer.position(bufferInfo.offset);
outputStream.write(outData, 0, outData.length);
mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
}
} catch (Throwable t) {
t.printStackTrace();
}
}
private void addADTStoPacket(byte[] packet, int packetLen) {
int profile = 2; //AAC LC
//39=MediaCodecInfo.CodecProfileLevel.AACObjectELD;
int freqIdx = 4; //44.1KHz
int chanCfg = 2; //CPE
// fill in ADTS data
packet[0] = (byte) 0xFF;
packet[1] = (byte) 0xF9;
packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
packet[6] = (byte) 0xFC;
}
public void touch(File f) {
try {
if (!f.exists())
f.createNewFile();
} catch (IOException e) {
e.printStackTrace();
}
}
}
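As an aside on the ADTS packing in addADTStoPacket(): the header math can be checked by hand with a standalone snippet (a worked example, not from the original post; outBitsSize = 412 gives packetLen = 419):

// Standalone check of the ADTS header math for packetLen = 419
// (profile 2 = AAC LC, freqIdx 4 = 44.1 kHz, chanCfg 2 = stereo).
public class AdtsCheck {
    public static void main(String[] args) {
        int profile = 2, freqIdx = 4, chanCfg = 2, packetLen = 419;
        byte[] p = new byte[7];
        p[0] = (byte) 0xFF; // sync word
        p[1] = (byte) 0xF9; // sync word end, MPEG-2, no CRC
        p[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
        p[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
        p[4] = (byte) ((packetLen & 0x7FF) >> 3);
        p[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
        p[6] = (byte) 0xFC;
        for (byte b : p) System.out.printf("%02X ", b & 0xFF); // FF F9 50 80 34 7F FC
    }
}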

Audio streaming via TCP socket on Android

I am streaming mic input from a C server via a socket. I know the stream works, because it does with a C client, and I am getting the right values on my Android client.
I am streaming a 1024-float array. One float is 4 bytes, so I get an incoming stream of 4096 bytes per frame. I extract the floats from those bytes, and I know the floats are the ones I sent, so that part should work.
Now I want to send that stream directly to the phone's speakers using AudioTrack. I tried to input the bytes I received directly: just noise. I tried to cast them back to a byte array, still the same. I tried to cast the floats to shorts (because AudioTrack takes bytes or shorts). I could get something that could have been my mic input (knocking), but very scratchy and extremely laggy. I would understand if there was a lag between frames, but I can't even get one clear sound.
I can, however, clearly output a sine tone that I produce locally and put into that short array.
Now I wonder if there are some issues in my code that any of you can see, because I don't see them.
What I am doing is: I put 4 bytes into a byte array and extract the float from them. As soon as I have one frame in my float array (I control that with a bool; not nice, but it should work), I put it into my short array and let AudioTrack play it. This double conversion might be slow, but I do it because it's the closest I have come to playing the actual input.
Edit:
I checked the endianness by comparing the floats: they have the proper values between -1 and 1 and are the same ones I send. Since I don't change the endianness when casting to float, I don't get why forwarding a 4096-byte array to AudioTrack directly doesn't work either. There might be something wrong with the multithreading, but I don't see what it could be.
Edit 2: I discovered a minor problem: I reset j at 1023. But that missing float should not have been the problem. What I changed besides that was to put the method that takes the stream from the socket into its own thread instead of calling it in an AsyncTask. That made it work; I am now able to understand the mic sounds. Still, the quality is very poor; might there be a reason for that in the code? Also I get a delay of about 10 seconds, of which only about half a second is caused by WLAN, so I wonder if it might be the code's fault. Any further thoughts are appreciated.
Edit 3: I played around with the code and implemented a few of greenapps' ideas from the comments. With the new thread structure I faced the problem of not getting any sound at all. I don't get how that is even possible, so I switched back. Other things I tried to make the threads more lightweight didn't have any effect. I still got a delay and very poor quality (I can identify knocks, but I can't understand voices). I figured something might be wrong with my conversions, so I put the bytes I receive from the socket directly into AudioTrack: nothing but ugly, pulsing static noise. Now I am even more confused, since this exact stream still works with the C client. I will report back if I find a solution, but any further help is welcome.
Edit 4: I should add that I can play mic input from another Android app where I send that input directly as bytes (there I exclude the float casting and put the bytes I receive directly into AudioTrack in my player code).
Also it occurred to me that it could be a problem that the float array streamed by the C server comes from a 64-bit machine while the phone is 32-bit. Could that be a problem somehow, even though I am just streaming floats as 4 bytes?
Or, another thought of mine: the underlying number format of the bytes I receive is float. What format does AudioTrack expect? Even if I put in just bytes, would I need to cast the floats to ints and back to bytes or something?
new code:
public class PCMSocket {
AudioTrack audioTrack;
boolean doStop = false;
int musicLength = 4096;
byte[] music;
Socket socket;
short[] buffer = new short[4096];
float[] fmusic = new float[1024];
WriteToAudio writeThread;
ReadFromSocket readThread;
public PCMSocket()
{
}
public void start()
{
doStop = false;
readThread = new ReadFromSocket();
readThread.start();
}
public class ReadFromSocket extends Thread
{
public void run()
{
doStop=true;
InetSocketAddress address = new InetSocketAddress("xxx.xxx.xxx.x", 8000);
socket = new Socket();
int timeout = 6000;
try {
socket.connect(address, timeout);
} catch (IOException e2) {
e2.printStackTrace();
}
musicLength = 1024;
InputStream is = null;
try {
is = socket.getInputStream();
} catch (IOException e) {
e.printStackTrace();
}
BufferedInputStream bis = new BufferedInputStream(is);
DataInputStream dis = new DataInputStream(bis);
try{
int minSize =AudioTrack.getMinBufferSize( 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT );
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT, minSize,
AudioTrack.MODE_STREAM);
audioTrack.play();
} catch (Throwable t)
{
t.printStackTrace();
doStop = true;
}
writeThread = new WriteToAudio();
readThread.start();
int i = 0;
int j=0;
try {
if(dis.available()>0)Log.d("PCMSocket", "receiving");
music = new byte[4];
while (dis.available() > 0)
{
music[i]=0;
music[i] = dis.readByte();
if(i==3)
{
int asInt = 0;
asInt = ((music[0] & 0xFF) << 0)
| ((music[1] & 0xFF) << 8)
| ((music[2] & 0xFF) << 16)
| ((music[3] & 0xFF) << 24);
float asFloat = 0;
asFloat = Float.intBitsToFloat(asInt);
fmusic[j]=asFloat;
}
i++;
j++;
if(i==4)
{
music = new byte[4];
i=0;
}
if(j==1024)
{
j=0;
if(doStop)doStop=false;
}
}
} catch (IOException e) {
e.printStackTrace();
}
try {
dis.close();
} catch (IOException e) {
e.printStackTrace();
}
}
};
public class WriteToAudio extends Thread
{
public void run()
{
while(true){
while(!doStop)
{
try{
writeSamples(fmusic);
}catch(Exception e)
{
e.printStackTrace();
}
doStop = true;
}
}
}
};
public void writeSamples(float[] samples)
{
fillBuffer( samples );
audioTrack.write( buffer, 0, samples.length );
}
private void fillBuffer( float[] samples )
{
if( buffer.length < samples.length )
buffer = new short[samples.length];
for( int i = 0; i < samples.length; i++ )
{
buffer[i] = (short)(samples[i] * Short.MAX_VALUE);
}
}
}
old code:
public class PCMSocket {
AudioTrack audioTrack;
WriteToAudio thread;
boolean doStop = false;
int musicLength = 4096;
byte[] music;
Socket socket;
short[] buffer = new short[4096];
float[] fmusic = new float[1024];
public PCMSocket()
{
}
public void start()
{
doStop = false;
new GetStream().executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
}
private class GetStream extends AsyncTask<Void, Void, Void> {
@Override
protected Void doInBackground(Void... values) {
PCMSocket.this.getSocket();
return null;
}
@Override
protected void onPreExecute() {
}
@Override
protected void onPostExecute(Void result)
{
return;
}
@Override
protected void onProgressUpdate(Void... values) {
}
}
private void getSocket()
{
doStop=true;
InetSocketAddress address = new InetSocketAddress("xxx.xxx.xxx.x", 8000);
socket = new Socket();
int timeout = 6000;
try {
socket.connect(address, timeout);
} catch (IOException e2) {
e2.printStackTrace();
}
musicLength = 1024;
InputStream is = null;
try {
is = socket.getInputStream();
} catch (IOException e) {
e.printStackTrace();
}
BufferedInputStream bis = new BufferedInputStream(is);
DataInputStream dis = new DataInputStream(bis);
try{
int minSize =AudioTrack.getMinBufferSize( 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT );
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT, minSize,
AudioTrack.MODE_STREAM);
audioTrack.play();
} catch (Throwable t)
{
t.printStackTrace();
doStop = true;
}
thread = new WriteToAudio();
thread.start();
int i = 0;
int j=0;
try {
if(dis.available()>0)Log.d("PCMSocket", "receiving");
music = new byte[4];
while (dis.available() > 0)
{
music[i]=0;
music[i] = dis.readByte();
if(i==3)
{
int asInt = 0;
asInt = ((music[0] & 0xFF) << 0)
| ((music[1] & 0xFF) << 8)
| ((music[2] & 0xFF) << 16)
| ((music[3] & 0xFF) << 24);
float asFloat = 0;
asFloat = Float.intBitsToFloat(asInt);
fmusic[j]=asFloat;
}
i++;
j++;
if(i==4)
{
music = new byte[4];
i=0;
}
if(j==1023)
{
j=0;
if(doStop)doStop=false;
}
}
} catch (IOException e) {
e.printStackTrace();
}
try {
dis.close();
} catch (IOException e) {
e.printStackTrace();
}
}
public class WriteToAudio extends Thread
{
public void run()
{
while(true){
while(!doStop)
{
try{
writeSamples(fmusic);
}catch(Exception e)
{
e.printStackTrace();
}
doStop = true;
}
}
}
};
public void writeSamples(float[] samples)
{
fillBuffer( samples );
audioTrack.write( buffer, 0, samples.length );
}
private void fillBuffer( float[] samples )
{
if( buffer.length < samples.length )
buffer = new short[samples.length*4];
for( int i = 0; i < samples.length; i++ )
{
buffer[i] = (short)(samples[i] * Short.MAX_VALUE);
}
}
}
Sooo... I just solved this only hours after I desperately put a bounty on it, but that's worth it.
I decided to start over. For the design with threads etc. I took some help from this awesome project; it helped me a lot. Now I use only one thread. It seems like the main point was the casting stuff, but I am not too sure; it may also have been the multithreading. I don't know what kind of bytes the byte[] variant of AudioTrack.write() expects, but certainly not float bytes. So I knew I needed to use the short[] variant. What I did was:
- put the bytes in a byte[]
- take 4 of them and cast them to a float in a loop
- take each float and cast it to a short
Since I already did that before, I am not too sure what the problem was. But now it works.
I hope this can help someone who goes through the same pain as me. Big thanks to all of you who participated and commented.
Edit: I just thought about the changes and figured that my earlier use of CHANNEL_CONFIGURATION_STEREO instead of MONO contributed a lot to the stuttering. So you might want to try that first if you encounter this problem. Still, for me it was only part of the solution; changing just that didn't help.
static final int frequency = 44100;
static final int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
static final int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
boolean isPlaying;
int playBufSize;
Socket socket;
AudioTrack audioTrack;
playBufSize=AudioTrack.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, frequency, channelConfiguration, audioEncoding, playBufSize, AudioTrack.MODE_STREAM);
new Thread() {
byte[] buffer = new byte[4096];
public void run() {
try {
socket = new Socket(ip, port);
}
catch (Exception e) {
e.printStackTrace();
}
audioTrack.play();
isPlaying = true;
while (isPlaying) {
int readSize = 0;
try { readSize = socket.getInputStream().read(buffer); }
catch (Exception e) {
e.printStackTrace();
}
short[] sbuffer = new short[1024];
// Each group of 4 bytes is one little-endian float; iterate over the
// bytes actually read so a short read doesn't replay stale data.
for(int i = 0; i + 3 < readSize; i += 4)
{
int asInt = ((buffer[i] & 0xFF) << 0)
| ((buffer[i+1] & 0xFF) << 8)
| ((buffer[i+2] & 0xFF) << 16)
| ((buffer[i+3] & 0xFF) << 24);
float asFloat = Float.intBitsToFloat(asInt);
sbuffer[i/4] = (short)(asFloat * Short.MAX_VALUE);
}
audioTrack.write(sbuffer, 0, readSize / 4);
}
audioTrack.stop();
try { socket.close(); }
catch (Exception e) { e.printStackTrace(); }
}
}.start();
Get rid of all, all, the available() tests. Just let your code block in the following read() statement(s). You don't have anything better to do anyway, and you're just burning potentially valuable CPU cycles by even trying to avoid the block.
EDIT To be specific:
try {
socket.connect(address, timeout);
} catch (IOException e2) {
e2.printStackTrace();
}
Poor practice to catch this exception and allow the following code to continue as though it hadn't happened. The exception should be allowed to propagate to the caller.
try {
is = socket.getInputStream();
} catch (IOException e) {
e.printStackTrace();
}
Ditto.
try {
if(dis.available()>0)Log.d("PCMSocket", "receiving");
Remove. You're receiving anyway.
music = new byte[4];
while (dis.available() > 0)
Pointless. Remove. The following reads will block.
{
music[i]=0;
Pointless. Remove.
music[i] = dis.readByte();
if(i==3)
{
int asInt = 0;
asInt = ((music[0] & 0xFF) << 0)
| ((music[1] & 0xFF) << 8)
| ((music[2] & 0xFF) << 16)
| ((music[3] & 0xFF) << 24);
This is all pointless. Replace it all with int asInt = Integer.reverseBytes(dis.readInt()); (readInt() returns an int and reads big-endian, whereas these bytes arrive little-endian).
float asFloat = 0;
asFloat = Float.intBitsToFloat(asInt);
Given that the original conversion to short was via floatValue * Short.MAX_VALUE, this conversion should be asFloat = (float)asInt/Short.MAX_VALUE.
if(i==4)
If i was 3 before it will be 4 now, so this test is also pointless.
music = new byte[4];
You don't need to reallocate music. Remove.
} catch (IOException e) {
e.printStackTrace();
}
See above. Pointless. The exception should be allowed to propagate to the caller.
try {
dis.close();
} catch (IOException e) {
e.printStackTrace();
}
All this should be in a finally block.
}
};
while(true){
while(!doStop)
You don't need both these loops.
try{
writeSamples(fmusic);
}catch(Exception e)
{
e.printStackTrace();
}
See above. Pointless. The exception should in this case terminate the loop, as any IOException writing to a socket is fatal to the connection.
if( buffer.length < samples.length )
buffer = new short[samples.length];
Why isn't buffer already the right size? Alternatively, what if buffer.length > samples.length?
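Putting the advice above together, the receive loop might look like the following sketch (hypothetical names; it assumes the server writes little-endian IEEE-754 floats in -1..1, as the question describes):

// A sketch only: block on the reads, let EOF end the loop, clean up in finally.
void streamToTrack(DataInputStream dis, AudioTrack audioTrack) throws IOException {
    short[] samples = new short[1024];
    audioTrack.play();
    try {
        while (true) {
            for (int j = 0; j < samples.length; j++) {
                int bits = Integer.reverseBytes(dis.readInt()); // readInt() is big-endian
                samples[j] = (short) (Float.intBitsToFloat(bits) * Short.MAX_VALUE);
            }
            audioTrack.write(samples, 0, samples.length); // blocks until queued
        }
    } catch (EOFException end) {
        // server closed the connection: normal end of stream
    } finally {
        audioTrack.stop();
        dis.close();
    }
}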

Split Wave audio file at silence [duplicate]

How can I detect silence when a recording operation is started in Java? What is PCM data? How can I calculate PCM data in Java?
I found the solution:
package bemukan.voiceRecognition.speechToText;
import javax.sound.sampled.*;
import java.io.*;
public class RecordAudio {
private File audioFile;
protected boolean running;
private ByteArrayOutputStream out;
private AudioInputStream inputStream;
final static float MAX_8_BITS_SIGNED = Byte.MAX_VALUE;
final static float MAX_8_BITS_UNSIGNED = 0xff;
final static float MAX_16_BITS_SIGNED = Short.MAX_VALUE;
final static float MAX_16_BITS_UNSIGNED = 0xffff;
private AudioFormat format;
private float level;
private int frameSize;
public RecordAudio(){
getFormat();
}
private AudioFormat getFormat() {
File file = new File("src/Facebook/1.wav");
AudioInputStream stream;
try {
stream = AudioSystem.getAudioInputStream(file);
format=stream.getFormat();
frameSize=stream.getFormat().getFrameSize();
return stream.getFormat();
} catch (UnsupportedAudioFileException e) {
} catch (IOException e) {
}
return null;
}
public void stopAudio() {
running = false;
}
public void recordAudio() {
try {
final AudioFormat format = getFormat();
DataLine.Info info = new DataLine.Info(
TargetDataLine.class, format);
final TargetDataLine line = (TargetDataLine)
AudioSystem.getLine(info);
line.open(format);
line.start();
Runnable runner = new Runnable() {
int bufferSize = (int) format.getSampleRate()
* format.getFrameSize();
byte buffer[] = new byte[bufferSize];
public void run() {
int readPoint = 0;
out = new ByteArrayOutputStream();
running = true;
int sum=0;
while (running) {
int count =
line.read(buffer, 0, buffer.length);
calculateLevel(buffer,0,0);
System.out.println(level);
if (count > 0) {
out.write(buffer, 0, count);
}
}
line.stop();
}
};
Thread captureThread = new Thread(runner);
captureThread.start();
} catch (LineUnavailableException e) {
System.err.println("Line unavailable: " + e);
System.exit(-2);
}
}
public File getAudioFile() {
byte[] audio = out.toByteArray();
InputStream input = new ByteArrayInputStream(audio);
try {
final AudioFormat format = getFormat();
final AudioInputStream ais =
new AudioInputStream(input, format,
audio.length / format.getFrameSize());
AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("temp.wav"));
input.close();
System.out.println("New file created!");
} catch (IOException e) {
System.out.println(e.getMessage());
}
return new File("temp.wav");
}
private void calculateLevel (byte[] buffer,
int readPoint,
int leftOver) {
int max = 0;
boolean use16Bit = (format.getSampleSizeInBits() == 16);
boolean signed = (format.getEncoding() ==
AudioFormat.Encoding.PCM_SIGNED);
boolean bigEndian = (format.isBigEndian());
if (use16Bit) {
for (int i=readPoint; i<buffer.length-leftOver; i+=2) {
int value = 0;
// deal with endianness
int hiByte = (bigEndian ? buffer[i] : buffer[i+1]);
int loByte = (bigEndian ? buffer[i+1] : buffer [i]);
if (signed) {
short shortVal = (short) hiByte;
shortVal = (short) ((shortVal << 8) | (byte) loByte);
value = shortVal;
} else {
value = (hiByte << 8) | loByte;
}
max = Math.max(max, value);
} // for
} else {
// 8 bit - no endianness issues, just sign
for (int i=readPoint; i<buffer.length-leftOver; i++) {
int value = 0;
if (signed) {
value = buffer [i];
} else {
short shortVal = 0;
shortVal = (short) (shortVal | buffer [i]);
value = shortVal;
}
max = Math.max (max, value);
} // for
} // 8 bit
// express max as float of 0.0 to 1.0 of max value
// of 8 or 16 bits (signed or unsigned)
if (signed) {
if (use16Bit) { level = (float) max / MAX_16_BITS_SIGNED; }
else { level = (float) max / MAX_8_BITS_SIGNED; }
} else {
if (use16Bit) { level = (float) max / MAX_16_BITS_UNSIGNED; }
else { level = (float) max / MAX_8_BITS_UNSIGNED; }
}
} // calculateLevel
}
How can I detect silence when a recording operation is started in Java?
Calculate the dB or RMS value for a group of sound frames and decide at what level it is considered to be 'silence'.
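For example, an RMS level in the 0..1 range converts to decibels relative to full scale with the standard formula (a sketch, assuming that convention):

// Convert an RMS level in 0..1 to dBFS; 0 dB is full scale.
static double toDecibels(double rms) {
    return 20.0 * Math.log10(rms); // e.g. rms = 0.01 gives -40 dB, a plausible silence cutoff
}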
What is PCM data?
Data that is in Pulse-code modulation format.
How can I calculate PCM data in Java?
I do not understand that question. But guessing it has something to do with the speech-recognition tag, I have some bad news. This might theoretically be done using the Java Speech API. But there are apparently no 'speech to text' implementations available for the API (only 'text to speech').
I have to calculate RMS for a speech-recognition project, but I do not know how to calculate it in Java.
For a single channel that is represented by signal sizes in a double ranging from -1 to 1, you might use this method.
/** Computes the RMS volume of a group of signal sizes ranging from -1 to 1. */
public double volumeRMS(double[] raw) {
double sum = 0d;
if (raw.length==0) {
return sum;
} else {
for (int ii=0; ii<raw.length; ii++) {
sum += raw[ii];
}
}
double average = sum/raw.length;
double sumMeanSquare = 0d;
for (int ii=0; ii<raw.length; ii++) {
sumMeanSquare += Math.pow(raw[ii]-average,2d);
}
double averageMeanSquare = sumMeanSquare/raw.length;
double rootMeanSquare = Math.sqrt(averageMeanSquare);
return rootMeanSquare;
}
There is a byte buffer to save input values from the line; what should I do with this buffer?
If using the volumeRMS(double[]) method, convert the byte values to an array of double values ranging from -1 to 1. ;)
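A sketch of that conversion, assuming 16-bit signed little-endian PCM (check your actual AudioFormat; the helper name is illustrative):

// Convert 16-bit signed little-endian PCM bytes into doubles in -1..1,
// suitable for passing to volumeRMS(double[]).
static double[] toDoubles(byte[] pcm, int validBytes) {
    double[] out = new double[validBytes / 2];
    for (int i = 0; i < out.length; i++) {
        short s = (short) ((pcm[2 * i] & 0xFF) | (pcm[2 * i + 1] << 8));
        out[i] = s / (double) Short.MAX_VALUE;
    }
    return out;
}

Then something like volumeRMS(toDoubles(buffer, count)) gives a level you can compare against your silence threshold.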
You need to compare each sample value against a number: silence is zero or near zero.
Please adapt this code to your requirements!
In this case the threshold is a variable named UMBRAL (Spanish for threshold)...
Suppose that you have access to the WAV file as bytes, with the header in ByteHeader...
private Integer Byte2PosIntBig(byte Byte24, byte Byte16, byte Byte08, byte Byte00) {
return new Integer (
((Byte24) << 24)|
((Byte16 & 0xFF) << 16)|
((Byte08 & 0xFF) << 8)|
((Byte00 & 0xFF) << 0));
}
Before ....
RandomAccessFile RAFSource = new RandomAccessFile("your old file wav", "r");
Begins here...
int PSData = 44; // canonical 44-byte header: data starts here
int UMBRAL = 1000; // example threshold; adapt it to your requirement
byte[] Bytes = new byte[4];
byte[] ByteHeader = new byte[44];
RAFSource.seek(0);
RAFSource.read(ByteHeader);
// data-chunk size lives at bytes 40..43, bits per sample at 34..35 (little-endian)
int WavSize = Byte2PosIntBig(ByteHeader[43],ByteHeader[42],ByteHeader[41],ByteHeader[40]);
int NumBits = Byte2PosIntBig((byte) 0, (byte) 0, ByteHeader[35], ByteHeader[34]);
int NumByte = NumBits/8;
for (int i = PSData; i < PSData+WavSize; i += NumByte) {
int WavSample;
if (NumByte == 2) {
// 16-bit samples are signed, little-endian
RAFSource.seek(i);
Bytes[0] = RAFSource.readByte();
Bytes[1] = RAFSource.readByte();
WavSample = (int)(((Bytes[1]) << 8)|((Bytes[0] & 0xFF) << 0));
if (Math.abs(WavSample) < UMBRAL) {
//SILENCE DETECTED!!!
}
} else {
// 8-bit samples are unsigned; center on zero before comparing
RAFSource.seek(i);
WavSample = (RAFSource.readByte() & 0xFF) - 128;
if (Math.abs(WavSample) < UMBRAL) {
//SILENCE DETECTED!!!
}
}
}

Encrypt byte array using Vigenère cipher in Java

I have to encrypt a file (JPG) using the Vigenère cipher. I wrote some code, but after encryption and decryption my file is corrupted. The first quarter of the image displays fine, but the rest of it is corrupted. Here is my code:
@Override
public byte[] encryptFile(byte[] file, String key) {
char[] keyChars = key.toCharArray();
byte[] bytes = file;
for (int i = 0; i < file.length; i++) {
int keyNR = keyChars[i % keyChars.length] - 32;
int c = bytes[i] & 255;
if ((c >= 32) && (c <= 127)) {
int x = c - 32;
x = (x + keyNR) % 96;
bytes[i] = (byte) (x + 32);
}
}
return bytes;
}
@Override
public byte[] decryptFile(byte[] file, String key) {
char[] keyChars = key.toCharArray();
byte[] bytes = file;
for (int i = 0; i < file.length; i++) {
int keyNR = keyChars[i % keyChars.length] - 32;
int c = bytes[i] & 255;
if ((c >= 32) && (c <= 127)) {
int x = c - 32;
x = (x - keyNR + 96) % 96;
bytes[i] = (byte) (x + 32);
}
}
return bytes;
}
What did I do wrong?
EDIT:
reading and writing to file:
public void sendFile(String selectedFile, ICipher cipher, String key) {
try {
DataOutputStream outStream = new DataOutputStream(client
.getOutputStream());
outStream.flush();
File file = new File(selectedFile);
FileInputStream fileStream = new FileInputStream(file);
long fileSize = file.length();
long completed = 0;
long bytesLeft = fileSize - completed;
String msg = "SENDING_FILE:" + file.getName() + ":" + fileSize;
outStream.writeUTF(cipher.encryptMsg(msg, key));
while (completed < fileSize) {
int step = (int) (bytesLeft > 150000 ? 150000 : bytesLeft);
byte[] buffer = new byte[step];
fileStream.read(buffer);
buffer = cipher.encryptFile(buffer, key);
outStream.write(buffer);
completed += step;
bytesLeft = fileSize - completed;
}
outStream.writeUTF(cipher.encryptMsg("SEND_COMPLETE", key));
fileStream.close();
} catch (IOException e) {
e.printStackTrace();
}
}
private void downloadFile(String fileName, int fileSize,DataInputStream input,ICipher cipher, String key) {
try {
FileOutputStream outStream = new FileOutputStream("C:\\" + fileName);
int bytesRead = 0, counter = 0;
while (counter < fileSize) {
int step = (int) (fileSize > 150000 ? 150000 : fileSize);
byte[] buffer = new byte[step];
bytesRead = input.read(buffer);
if (bytesRead >= 0) {
buffer = cipher.decryptFile(buffer, key);
outStream.write(buffer, 0, bytesRead);
counter += bytesRead;
}
if (bytesRead < 1024) {
outStream.flush();
break;
}
}
Display.getDefault().syncExec(new Runnable() {
@Override
public void run() {
window.handleMessage("Download sucessfully");
}
});
outStream.close();
} catch (Exception e) {
Display.getDefault().syncExec(new Runnable() {
@Override
public void run() {
window.handleMessage("Error on downloading file!");
}
});
}
}
You encode the file in whatever chunks come from the disk I/O:
int step = (int) (bytesLeft > 150000 ? 150000 : bytesLeft);
byte[] buffer = new byte[step];
fileStream.read(buffer);
buffer = cipher.encryptFile(buffer, key);
But you decode the file in whatever chunks come from the network I/O:
bytesRead = input.read(buffer);
if (bytesRead >= 0) {
buffer = cipher.decryptFile(buffer, key);
outStream.write(buffer, 0, bytesRead);
counter += bytesRead;
}
These chunks are likely to disagree. The disk I/O may always give you full chunks (lucky for you), but the network I/O will likely give you packet-sized chunks (1500 bytes minus header).
The cipher should get an offset into the already encoded/decoded data (or encode/decode everything at once), and use that to shift the key appropriately, or this may happen:
original: ...LOREM IPSUM...
key : ...abCde abCde...
encoded : ...MQUIR JRVYR...
key : ...abCde Cdeab... <<note the key got shifted
decoded : ...LOREM GNQXP... <<output wrong after the first chunk.
Since the packet data size is (for Ethernet-sized TCP/IP packets) aligned at four bytes, a key of length four is likely to be always aligned.
Another issue is that you are ignoring the number of bytes read from disk when uploading the file. While disk I/O is likely to give you full-sized chunks (the file is likely to be memory-mapped, or the underlying native API provides this guarantee), nothing should be taken for granted. Always use the number of bytes actually read: bytesRead = fileStream.read(buffer);
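A sketch of the offset-based fix for the cipher itself (hypothetical signature, not the poster's API): pass in how many bytes have already been processed, so the key index survives chunk boundaries, and have the caller advance the offset by each chunk's length.

// Vigenère step that carries a running offset so that chunk boundaries
// no longer shift the key relative to the data.
public byte[] encryptFile(byte[] chunk, String key, long offset) {
    char[] keyChars = key.toCharArray();
    for (int i = 0; i < chunk.length; i++) {
        int keyNR = keyChars[(int) ((offset + i) % keyChars.length)] - 32;
        int c = chunk[i] & 255;
        if (c >= 32 && c <= 127) {
            chunk[i] = (byte) ((c - 32 + keyNR) % 96 + 32);
        }
    }
    return chunk;
}

decryptFile needs the same extra parameter, with the subtraction in place of the addition.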
