Extreme delay playing an audio stream from an IP cam - Java

(Sorry, not an English speaker; expect some grammatical errors.)
I'm developing a piece of software to manage D-Link IP cams (the DCS-xxxx series and others). Because these cameras expose an audio stream (some models even have a speaker for bidirectional communication), I would like to play it at the user's request.
All entry points are behind HTTP basic authentication (but weirdly enough I can't use http://USER:PASS@192.168.1.100, because I get a 401).
I use the javax.sound.* packages for this, but for some reason the audio starts playing only after 15 to 20 seconds, with a total delay of 30-40 seconds (EDIT: 45 seconds on average), and the audio is played from the beginning, so it's even worse.
This is the class (bare minimum, just for testing purposes):
import java.io.IOException;
import java.net.Authenticator;
import java.net.MalformedURLException;
import java.net.PasswordAuthentication;
import java.net.URL;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.UnsupportedAudioFileException;

public class AudioPlayer implements Runnable {

    private URL URL;
    private String USERNAME;
    private String PASSWORD;
    private volatile boolean stop = false;

    public AudioPlayer(String url, String user, String pass) throws MalformedURLException {
        this.URL = new URL(url);
        this.USERNAME = user;
        this.PASSWORD = pass;
    }

    public void shutdown() {
        stop = true;
    }

    @Override
    public void run() {
        Authenticator.setDefault(new Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(USERNAME, PASSWORD.toCharArray());
            }
        });
        try {
            Clip clip = AudioSystem.getClip();
            AudioInputStream inputStream = AudioSystem.getAudioInputStream(URL);
            clip.open(inputStream);
            clip.start();
            while (!stop && clip.isRunning()) {} //busy-wait until stopped or playback ends
            clip.stop();
            System.err.println("AUDIO PLAYER STOPPED");
        } catch (LineUnavailableException | IOException | UnsupportedAudioFileException e) {
            e.printStackTrace();
        }
    }
}
The Authenticator part is needed because the IP cam uses HTTP basic authentication.
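For reference, the class is meant to be driven from a plain thread, something like this (the stream path and credentials are made-up placeholders, and the enclosing method must declare MalformedURLException):

AudioPlayer player = new AudioPlayer("http://192.168.1.100/audio.cgi", "admin", "secret"); //placeholder URL and credentials
new Thread(player).start();
//... later, at the user's request:
player.shutdown();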
I've read somewhere that AudioSystem makes several passes with different algorithms to find the right one, then resets the stream to the beginning, and only then starts to play.
So, because of this, maybe AudioSystem has trouble working out which codec to use (maybe it needs some kind of header) and spends quite some time before starting to play the audio.
It's worth knowing that even VLC struggles to keep up with the stream, losing up to 8 seconds before playing (though 8 seconds is way better than 20).
The IP cam is on a local network.
Is there something wrong with my code? Is there some method I'm not seeing?
I really don't know where to look with this one.
I was unable to find any meaningful answer here or elsewhere.

After fiddling with one answer, I found the solution, which gives a delay of 1 to 2 seconds (the same delay as the official app or the web configuration page, so pretty much perfect). The key change is to write the incoming bytes to a SourceDataLine as they arrive, instead of letting Clip buffer the whole stream before playback:
private void playStreamedURL() throws IOException {
    //to avoid the 401 error
    Authenticator.setDefault(new Authenticator() {
        protected PasswordAuthentication getPasswordAuthentication() {
            //USERNAME and PASSWORD are defined in the class
            return new PasswordAuthentication(USERNAME, PASSWORD.toCharArray());
        }
    });
    AudioInputStream AIS = null;
    SourceDataLine line = null;
    try {
        //get the input stream
        AIS = AudioSystem.getAudioInputStream(this.URL);
        //get the format. Very important!
        AudioFormat format = AIS.getFormat();
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        //create the output line
        line = (SourceDataLine) AudioSystem.getLine(info);
        //open the line with the detected format (other solutions manually create the format,
        //and that is a big problem because things like the sample rate aren't standard.
        //For example, the IP cam I use for testing uses 11205 as its sample rate.)
        line.open(format);
        int framesize = format.getFrameSize();
        //NOT_SPECIFIED is -1, which creates problems with the buffer definition, so it's revalued if necessary
        if (framesize == AudioSystem.NOT_SPECIFIED)
            framesize = 1;
        //the buffer used to read and write bytes from the stream to the audio line
        byte[] buffer = new byte[4 * 1024 * framesize];
        int total = 0;
        boolean playing = false;
        int r, towrite, remaining;
        while ((r = AIS.read(buffer, total, buffer.length - total)) >= 0) { //or != -1
            total += r;
            //avoid starting the line more than once
            if (!playing) {
                line.start();
                playing = true;
            }
            //actually play the sound (the whole frames in the buffer)
            towrite = (total / framesize) * framesize;
            line.write(buffer, 0, towrite);
            //if some bytes remain (a partial frame), move them to the front of the buffer and adjust the total
            remaining = total - towrite;
            if (remaining > 0)
                System.arraycopy(buffer, towrite, buffer, 0, remaining);
            total = remaining;
        }
        //line.drain() could be used here, but it would consume the rest of the buffer.
        line.stop();
        line.flush();
    } catch (UnsupportedAudioFileException | IOException | LineUnavailableException e) {
        e.printStackTrace();
    } finally {
        if (line != null)
            line.close();
        if (AIS != null)
            AIS.close();
    }
}
Still, some optimizations could be made, but it works.
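One such optimization: the loop above only returns once the camera closes the stream. To get back the shutdown() behavior of the original Runnable class, the read loop can also check the volatile stop flag. A sketch against the method above (same variables, not a standalone snippet):

int r;
while (!stop && (r = AIS.read(buffer, total, buffer.length - total)) >= 0) {
    total += r;
    //... same start/write/arraycopy logic as above ...
}
line.stop();  //stop immediately; drain() would first play out what is still buffered
line.flush(); //discard anything left on the line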

Related

Audio - Streaming Audio from Java is Choppy

My main objective is to create a live stream of encrypted voice chat from a mic.
The encrypted audio is then transmitted over the network from one client to another.
The problem is that the audio always stutters and gets choppy while the program is running (streaming).
I tried different types of hardware (PC, laptop, Raspberry Pi).
Different OSes as well.
Sampling unencrypted audio, to eliminate any issue caused by the encryption algorithm.
Changing the audio sample rate.
Unfortunately, everything failed.
To keep it simple, I've only included the code needed to transmit the audio over the network, without the encryption.
MAIN CLASS - both sender and receiver
package com.emaraic.securevoice;

import com.emaraic.securevoice.utils.AES;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import javax.sound.sampled.*;

public class SecureVoice {

    public static void main(String[] args) {
        Receiver rx = new Receiver();
        rx.start();
        Transmitter tx = new Transmitter();
        tx.start();
    }

    public static AudioFormat getAudioFormat() { //you may change these parameters to fit your mic
        float sampleRate = 8000.0f;  //8000,11025,16000,22050,44100
        int sampleSizeInBits = 16;   //8,16
        int channels = 1;            //1,2
        boolean signed = true;       //true,false
        boolean bigEndian = false;   //true,false
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    }

    public static final String ANSI_BOLD = "\033[0;1m"; //not working in NetBeans
    public static final String ANSI_RESET = "\033[0m";
    public static final String ANSI_BLACK = "\033[30m";
    public static final String ANSI_RED = "\033[31m";
    public static final String ANSI_GREEN = "\033[32;4m";
    public static final String ANSI_YELLOW = "\033[33m";
    public static final String ANSI_BLUE = "\033[34m";
    public static final String ANSI_PURPLE = "\033[35m";
    public static final String ANSI_CYAN = "\033[36m";
    public static final String ANSI_WHITE = "\033[37m";
}
SENDER
package com.emaraic.securevoice;

import com.emaraic.securevoice.utils.AES;
import java.io.*;
import java.io.File;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.Port;
import javax.sound.sampled.TargetDataLine;

public class Transmitter extends Thread {

    // these parameters must be copied and used in the Receiver class of the other client
    private static final String TX_IP = "10.101.114.179"; //ip to send to
    private static final int TX_PORT = 1034;

    @Override
    public void run() {
        SecureVoice color = new SecureVoice();
        Mixer.Info minfo[] = AudioSystem.getMixerInfo();
        System.out.println(color.ANSI_BLUE + "Detecting sound card drivers...");
        for (Mixer.Info minfo1 : minfo) {
            System.out.println(" " + minfo1);
        }
        if (AudioSystem.isLineSupported(Port.Info.MICROPHONE)) {
            try {
                DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, SecureVoice.getAudioFormat());
                final TargetDataLine line = (TargetDataLine) AudioSystem.getLine(dataLineInfo); //recording from mic
                line.open(SecureVoice.getAudioFormat());
                line.start(); //start recording
                System.out.println(color.ANSI_GREEN + "Recording...");
                byte tempBuffer[] = new byte[line.getBufferSize()];
                System.out.println(color.ANSI_BLUE + "Buffer size = " + tempBuffer.length + " bytes");
                //AudioCapture audio = new AudioCapture(line); //capture the audio into a .wav file
                //audio.start();
                while (true) { //AES encryption
                    int read = line.read(tempBuffer, 0, tempBuffer.length);
                    byte[] encrypt = AES.encrypt(tempBuffer, 0, read);
                    //sendToUDP(encrypt);
                    sendToUDP(tempBuffer);
                }
            } catch (Exception e) {
                System.out.println(e.getMessage());
                System.exit(0);
            }
        }
    }

    public static void sendToUDP(byte soundpacket[]) {
        try {
            //EncryptedAudio encrypt = new EncryptedAudio(soundpacket);
            //encrypt.start();
            DatagramSocket sock = new DatagramSocket();
            sock.send(new DatagramPacket(soundpacket, soundpacket.length, InetAddress.getByName(TX_IP), TX_PORT));
            sock.close();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
RECEIVER
package com.emaraic.securevoice;

import com.emaraic.securevoice.utils.AES;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class Receiver extends Thread {

    // these parameters must be used in the Transmitter class of the other client
    private static final String RX_IP = "localhost";
    private static final int RX_PORT = 1034;

    @Override
    public void run() {
        byte b[] = null;
        while (true) {
            b = rxFromUDP();
            speak(b);
        }
    }

    public static byte[] rxFromUDP() {
        try {
            DatagramSocket sock = new DatagramSocket(RX_PORT);
            byte soundpacket[] = new byte[8192];
            DatagramPacket datagram = new DatagramPacket(soundpacket, soundpacket.length, InetAddress.getByName(RX_IP), RX_PORT);
            sock.receive(datagram);
            sock.close();
            //return AES.decrypt(datagram.getData(), 0, soundpacket.length); // soundpacket;
            return soundpacket; //if you want to hear the encrypted form
        } catch (Exception e) {
            System.out.println(e.getMessage());
            return null;
        }
    }

    public static void speak(byte soundbytes[]) {
        try {
            DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, SecureVoice.getAudioFormat());
            try (SourceDataLine sourceDataLine = (SourceDataLine) AudioSystem.getLine(dataLineInfo)) {
                sourceDataLine.open(SecureVoice.getAudioFormat());
                sourceDataLine.start();
                sourceDataLine.write(soundbytes, 0, soundbytes.length);
                sourceDataLine.drain();
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
EXTRA LINK
http://emaraic.com/blog/secure-voice-chat
IDE Used
- Netbeans 11.1
Java JDK version
- Java 13 (Windows)
- OpenJDK11 (Linux)
There are two problems. Network-streamed data will have jitter in its arrival time, and starting and stopping audio playback causes delay gaps and jitter due to OS and hardware driver overhead. There is also the smaller problem of audio sample-rate clock synchronization between the recording and playing systems. All of these can disrupt a continuous stream of audio samples at a fixed rate.
To avoid the audio start-up latency problem, don't stop your audio play or record system between network packets; always have audio data ready to play continuously at the current sample rate. To help cover network jitter, buffer some amount of audio data before starting playback, so there is always some audio ready to play even if the next network packet is slightly delayed.
You may have to gather some statistics on the audio startup and network latency, and their variation, to determine a suitable amount to buffer. The alternative is an audio dropout concealment algorithm, which is far more complicated to implement.
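A minimal sketch of that idea, applied to the Receiver above: one socket and one line are kept open for the life of the thread, and the line is only started once a few packets are queued. The 5-packet pre-buffer and the buffer sizing are assumptions to tune from your own measurements, not given values.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class BufferedReceiver extends Thread {

    private static final int RX_PORT = 1034;        //must match the Transmitter
    private static final int PACKET_BYTES = 8192;   //must match the send size
    private static final int PREBUFFER_PACKETS = 5; //assumption: tune from your jitter statistics

    @Override
    public void run() {
        try (DatagramSocket sock = new DatagramSocket(RX_PORT)) { //one socket for the whole session
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, SecureVoice.getAudioFormat());
            SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
            //size the line buffer so the pre-buffered packets fit without blocking
            line.open(SecureVoice.getAudioFormat(), PACKET_BYTES * (PREBUFFER_PACKETS + 2));
            int queued = 0;
            byte[] buf = new byte[PACKET_BYTES];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                sock.receive(packet);
                line.write(packet.getData(), 0, packet.getLength()); //queues data; plays only once started
                if (queued < PREBUFFER_PACKETS && ++queued == PREBUFFER_PACKETS) {
                    line.start(); //start playback only after the pre-buffer is filled
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}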

JavaMail Transmission of Large and Binary MIME Messages (RFC 3030)

I need to exchange mail with a particular mail server using RFC 3030 for large MIME messages.
The original task is: if the MIME message size is > 80 MB, I need to use RFC 3030.
As I understand it, JavaMail can't do this out of the box?
Maybe I can create some handler or extension for JavaMail that implements RFC 3030?
Please help. I don't know what to do.
A quick look into SMTPTransport confirms it: plain old JavaMail does not support BDAT; it will always try to send with the DATA command, like this:
this.message.writeTo(data(), ignoreList);
finishData();
If you're not afraid to tinker with core JDK classes (and have no legal reason not to), you could override the methods data() and finishData(), as they're both protected (source code from here):
/**
 * Send the <code>DATA</code> command to the SMTP host and return
 * an OutputStream to which the data is to be written.
 *
 * @since JavaMail 1.4.1
 */
protected OutputStream data() throws MessagingException {
    assert Thread.holdsLock(this);
    issueSendCommand("DATA", 354);
    dataStream = new SMTPOutputStream(serverOutput);
    return dataStream;
}

/**
 * Terminate the sent data.
 *
 * @since JavaMail 1.4.1
 */
protected void finishData() throws IOException, MessagingException {
    assert Thread.holdsLock(this);
    dataStream.ensureAtBOL();
    issueSendCommand(".", 250);
}
In order to support RFC 3030, I'd suggest you start off by buffering the whole message into a ByteArrayOutputStream, which you'll need in order to determine the size of the message to be sent. If "small", do as SMTPTransport would have done. If "big", split the bytes into chunks and send them BDAT-style. I'd suggest ending with a 0-length LAST BDAT, and code like:
protected void finishData() throws IOException, MessagingException {
    assert Thread.holdsLock(this);
    dataStream.ensureAtBOL();
    issueSendCommand("BDAT 0 LAST", 250);
}
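For reference, the exchange RFC 3030 describes looks roughly like this on the wire (sizes and reply texts are illustrative):

C: MAIL FROM:<sender@example.org>
S: 250 OK
C: RCPT TO:<recipient@example.org>
S: 250 OK
C: BDAT 65536
C: (first 65536 octets of the message)
S: 250 65536 octets received
C: BDAT 0 LAST
S: 250 Message OK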
-- EDIT --
Here's a very unsophisticated first approach, with lots of things to do better. Most importantly, a chunking-as-you-go implementation of the OutputStream that sends out chunks of data while message.writeTo() keeps filling it: filling up a big byte[] just to split it into chunks later is very, very bad in terms of memory footprint, but it's easier to read this way, as all the chunking and sending happens in one place. Please note that this example uses reflection to access the serverOutput field in Oracle's SMTPTransport, so it might break at any time, with no warning, with any new release of JavaMail. Also, my exception handling does not follow RFC 3030 for now, as no RSET is performed if BDAT fails.
package de.janschweizer;

import java.io.BufferedReader;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.io.StringReader;
import java.lang.reflect.Field;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.URLName;
import com.sun.mail.smtp.SMTPOutputStream;

public class SMTPTransport extends com.sun.mail.smtp.SMTPTransport {

    //We can have our own copy - it's only used in the methods we override anyway.
    private SMTPOutputStream dataStream;
    private ByteArrayOutputStream baos;

    public SMTPTransport(Session session, URLName urlname, String string, boolean bool) {
        super(session, urlname, string, bool);
    }

    public SMTPTransport(Session session, URLName urlname) {
        super(session, urlname);
    }

    protected OutputStream data() throws MessagingException {
        assert (Thread.holdsLock(this));
        if (!supportsExtension("CHUNKING")) {
            return super.data();
        }
        baos = new ByteArrayOutputStream();
        this.dataStream = new SMTPOutputStream(baos);
        return this.dataStream;
    }

    protected void finishData() throws IOException, MessagingException {
        assert (Thread.holdsLock(this));
        if (!supportsExtension("CHUNKING")) {
            super.finishData();
            return;
        }
        this.dataStream.ensureAtBOL();
        dataStream.flush();
        BufferedReader br = new BufferedReader(new StringReader(new String(baos.toByteArray())));
        try {
            //BAD reflection hack
            Field fServerOutput = com.sun.mail.smtp.SMTPTransport.class.getDeclaredField("serverOutput");
            fServerOutput.setAccessible(true);
            OutputStream os = (OutputStream) fServerOutput.get(this);

            //Do the Chunky
            ByteArrayOutputStream bchunk = new ByteArrayOutputStream();
            PrintWriter pw = new PrintWriter(bchunk);
            String line = br.readLine();
            int linecount = 0;
            while (line != null) {
                pw.println(line);
                if (++linecount % 5000 == 0) {
                    pw.flush();
                    byte[] chunk = bchunk.toByteArray();
                    sendChunk(os, chunk);
                    bchunk = new ByteArrayOutputStream();
                    pw = new PrintWriter(bchunk);
                }
                line = br.readLine();
            }
            pw.flush();
            byte[] chunk = bchunk.toByteArray();
            sendLastChunk(os, chunk);
        } catch (Exception e) {
            throw new MessagingException("ReflectionError", e);
        }
    }

    private void sendChunk(OutputStream os, byte[] chunk) throws MessagingException, IOException {
        sendCommand("BDAT " + chunk.length);
        os.write(chunk);
        os.flush();
        int rc = readServerResponse();
        if (rc != 250) {
            throw new MessagingException("Something very wrong");
        }
    }

    private void sendLastChunk(OutputStream os, byte[] chunk) throws MessagingException, IOException {
        sendCommand("BDAT " + chunk.length + " LAST");
        os.write(chunk);
        os.flush();
        int rc = readServerResponse();
        if (rc != 250) {
            throw new MessagingException("Something very wrong");
        }
    }
}
With this META-INF/javamail.providers
protocol=smtp; type=transport; class=de.janschweizer.SMTPTransport; vendor=Jan Schweizer;
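With that provider registration on the classpath, ordinary JavaMail code picks the subclass up transparently. A sketch (the host and credentials are placeholders; the message itself still has to be built):

import java.util.Properties;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.MimeMessage;

public class ChunkedSendDemo {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "mail.example.org"); //placeholder host
        Session session = Session.getInstance(props);

        MimeMessage message = new MimeMessage(session);
        //... build the large message here ...

        //"smtp" now resolves to de.janschweizer.SMTPTransport via javamail.providers
        Transport transport = session.getTransport("smtp");
        transport.connect("user", "password"); //placeholder credentials
        transport.sendMessage(message, message.getAllRecipients());
        transport.close();
    }
}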

Checking the level of audio playback in a Mixer's Line?

I'm trying to figure out if sound of any kind is playing in Windows (by any application). If something is making a noise somewhere, I want to know about it!
After following the docs, I've found out how to get a list of the mixers on the machine, as well as the lines for those mixers -- which, if I understand correctly, are what's used for the mixer's input/output.
However, the problem I'm having is that I don't know how to get the data I need from the line.
The only interface I see that has a notion of a volume level is DataLine. The problem with that is that I can't figure out what returns an object implementing the DataLine interface.
Enumerating all of the mixers and lines:
public static void printMixers() {
    Mixer.Info[] mixers = AudioSystem.getMixerInfo();
    for (Mixer.Info mixerInfo : mixers) {
        Mixer mixer = AudioSystem.getMixer(mixerInfo);
        try {
            mixer.open();
            Line.Info[] lines = mixer.getSourceLineInfo();
            for (Line.Info linfo : lines) {
                System.out.println(linfo);
            }
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }
}
That code enumerates and displays all of the audio devices on my machine. From that, shouldn't one of those Lines contain some kind of playback level data?
Oh, you wish to find the volume? Well, not all hardware supports it, but here is how you get the DataLine:
public static SourceDataLine getSourceDataLine(Line.Info lineInfo) {
    try {
        return (SourceDataLine) AudioSystem.getLine(lineInfo);
    } catch (Exception ex) {
        ex.printStackTrace();
        return null;
    }
}
Then just call SourceDataLine.getLevel() to get the volume. I hope this helps.
NB: If the sound originates from outside the JVM, or not via the Java Sound API, this method will not detect it, as the JVM does not have access to the OS equivalent of the SourceDataLine.
UPDATE: Upon further research, getLevel() is not implemented on most systems. So I have manually implemented the method, based on this forum discussion: https://community.oracle.com/message/5391003
Here are the classes:
//MicrophoneAnalyzer and FLACFileWriter come from external libraries
//(their imports are omitted here, as in the original post)
import java.io.IOException;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;

public class Main {

    public static void main(String[] args) {
        MicrophoneAnalyzer mic = new MicrophoneAnalyzer(FLACFileWriter.FLAC);
        System.out.println("HELLO");
        mic.open();
        while (true) {
            byte[] buffer = new byte[mic.getTargetDataLine().getFormat().getFrameSize()];
            mic.getTargetDataLine().read(buffer, 0, buffer.length);
            try {
                System.out.println(getLevel(mic.getAudioFormat(), buffer));
            } catch (Exception e) {
                System.out.println("ERROR");
                e.printStackTrace();
            }
        }
    }

    public static double getLevel(AudioFormat af, byte[] chunk) throws IOException {
        PCMSigned8Bit converter = new PCMSigned8Bit(af);
        if (chunk.length != converter.getRequiredChunkByteSize())
            return -1;
        AudioInputStream ais = converter.convert(chunk);
        ais.read(chunk, 0, chunk.length);
        long lSum = 0;
        for (int i = 0; i < chunk.length; i++)
            lSum = lSum + chunk[i];
        double dAvg = (double) lSum / chunk.length; //cast avoids truncating the mean
        double sumMeanSquare = 0d;
        for (int j = 0; j < chunk.length; j++)
            sumMeanSquare = sumMeanSquare + Math.pow(chunk[j] - dAvg, 2d);
        double averageMeanSquare = sumMeanSquare / chunk.length;
        return Math.pow(averageMeanSquare, 0.5d);
    }
}
The method I used only works on 8-bit PCM, so we have to convert the encoding to that using these two classes. Here is the general abstract converter class:
import java.io.ByteArrayInputStream;
import java.io.IOException;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

abstract class AbstractSignedLevelConverter {

    private AudioFormat srcf;

    public AbstractSignedLevelConverter(AudioFormat sourceFormat) {
        srcf = sourceFormat;
    }

    protected AudioInputStream convert(byte[] chunk) {
        AudioInputStream ais = null;
        if (AudioSystem.isConversionSupported(AudioFormat.Encoding.PCM_SIGNED, srcf)) {
            //the AudioInputStream length is given in sample frames, not bytes
            long frames = chunk.length / srcf.getFrameSize();
            if (srcf.getEncoding() != AudioFormat.Encoding.PCM_SIGNED)
                ais = AudioSystem.getAudioInputStream(
                        AudioFormat.Encoding.PCM_SIGNED,
                        new AudioInputStream(new ByteArrayInputStream(chunk), srcf, frames));
            else
                ais = new AudioInputStream(new ByteArrayInputStream(chunk), srcf, frames);
        }
        return ais;
    }

    abstract public double convertToLevel(byte[] chunk) throws IOException;

    public int getRequiredChunkByteSize() {
        return srcf.getFrameSize();
    }
}
And here is the one for 8-bit PCM:
import java.io.IOException;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;

public class PCMSigned8Bit extends AbstractSignedLevelConverter {

    PCMSigned8Bit(AudioFormat sourceFormat) {
        super(sourceFormat);
    }

    public double convertToLevel(byte[] chunk) throws IOException {
        if (chunk.length != getRequiredChunkByteSize())
            return -1;
        AudioInputStream ais = convert(chunk);
        ais.read(chunk, 0, chunk.length);
        return (double) chunk[0];
    }
}
This is for TargetDataLine, which may not work in your use case, but you could build a wrapper around SourceDataLine and use this approach to properly implement these methods. Hope this helps.
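If your line delivers the much more common 16-bit signed PCM, you can skip the converter classes and compute the RMS level directly from the raw bytes. A sketch, assuming 16-bit signed little-endian mono from the default microphone (the format and buffer size are assumptions, not taken from the question):

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.TargetDataLine;

public class LevelMeter {

    public static void main(String[] args) throws Exception {
        //assumption: 16-bit signed little-endian mono at 44.1 kHz
        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(new DataLine.Info(TargetDataLine.class, fmt));
        line.open(fmt);
        line.start();
        byte[] buf = new byte[4096];
        while (true) {
            int n = line.read(buf, 0, buf.length);
            double sum = 0;
            for (int i = 0; i + 1 < n; i += 2) {
                int sample = (buf[i + 1] << 8) | (buf[i] & 0xFF); //little-endian: low byte first
                sum += (double) sample * sample;
            }
            double rms = Math.sqrt(sum / (n / 2.0)); //root mean square over the buffer
            System.out.printf("RMS level: %.1f%n", rms);
        }
    }
}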

How to read an audio file? Which method should I use?

I have a panel with 2 buttons. When I click button 1, I'd simply like to play an audio file (a .WAV in this case). Then, when I click button 2, I'd like to stop the music.
I did some research, but I'm a little confused by the different methods.
Which one is best in my case? Can someone explain the difference between AudioClip, Java Sound, and the Java Media Framework, please?
I also tried an example, but it contains errors.
Here is my Main class:
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class Main {

    public static void main(String[] args) {
        SoundPlayer player = new SoundPlayer("C:/Documents and Settings/All Users/Documents/Ma musique/Échantillons de musique/Symphonie n° 9 de Beethoven (scherzo).wma");
        InputStream stream = new ByteArrayInputStream(player.getSamples());
        player.play(stream);
    }
}
Here is my SoundPlayer class:
import java.io.DataInputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import javax.sound.sampled.*;

public class SoundPlayer {

    private AudioFormat format;
    private byte[] samples;

    /**
     * @param filename the link to the audio file (URL or absolute path)
     */
    public SoundPlayer(String filename) {
        try {
            AudioInputStream stream = AudioSystem.getAudioInputStream(new File(filename));
            format = stream.getFormat();
            samples = getSamples(stream);
        } catch (UnsupportedAudioFileException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public byte[] getSamples() {
        return samples;
    }

    public byte[] getSamples(AudioInputStream stream) {
        int length = (int) (stream.getFrameLength() * format.getFrameSize());
        byte[] samples = new byte[length];
        DataInputStream in = new DataInputStream(stream);
        try {
            in.readFully(samples);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return samples;
    }

    public void play(InputStream source) {
        int bufferSize = format.getFrameSize() * Math.round(format.getSampleRate() / 10);
        byte[] buffer = new byte[bufferSize];
        SourceDataLine line;
        try {
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
            line = (SourceDataLine) AudioSystem.getLine(info);
            line.open(format, bufferSize);
        } catch (LineUnavailableException e) {
            e.printStackTrace();
            return;
        }
        line.start();
        try {
            int numBytesRead = 0;
            while (numBytesRead != -1) {
                numBytesRead = source.read(buffer, 0, buffer.length);
                if (numBytesRead != -1)
                    line.write(buffer, 0, numBytesRead);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        line.drain();
        line.close();
    }
}
Stack trace:
javax.sound.sampled.UnsupportedAudioFileException: could not get audio input stream from input file
at javax.sound.sampled.AudioSystem.getAudioInputStream(Unknown Source)
at SoundPlayer.<init>(SoundPlayer.java:19)
at Main.main(Main.java:8)
Exception in thread "main" java.lang.NullPointerException
at java.io.ByteArrayInputStream.<init>(Unknown Source)
at Main.main(Main.java:9)
Thanks a lot in advance!
That exception will stay: *.wma files are not supported by the standard install.
The simplest solution would be to use *.wav files or other supported formats.
You can get more info at:
https://stackoverflow.com/tags/javasound/info
SoundPlayer player = new SoundPlayer("C:/Documents and Settings/All Users/" +
"Documents/Ma musique/Échantillons de musique/" +
"Symphonie n° 9 de Beethoven (scherzo).wma")
Ah, WMA. Great format, but Java (Standard Edition) does not provide a Service Provider Interface that supports it.
You will either need to supply an SPI that allows Java Sound to support it, or use a different API. I don't know of any APIs that provide support for WMA. Can you encode it in a different format?
See the Java Sound info page for a way to support MP3, but it requires the MP3 SPI from JMF.
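Once the file is in a supported format such as .wav, a play/stop pair for the two buttons can stay very small with Clip. A sketch (the file path is a placeholder):

import java.io.File;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class WavButtons {

    private Clip clip;

    //button 1: load and play the file
    public void play() throws Exception {
        clip = AudioSystem.getClip();
        clip.open(AudioSystem.getAudioInputStream(new File("C:/path/to/music.wav"))); //placeholder path
        clip.start();
    }

    //button 2: stop and release the line
    public void stop() {
        if (clip != null) {
            clip.stop();
            clip.close();
        }
    }
}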
Write the full path of your music file and it will work.
I've found a solution to my problem.
In my case, the JavaZoom library works well.
Here is a sample that simply plays an audio file on launch (no graphical part):
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import javazoom.jl.player.advanced.AdvancedPlayer;
import javazoom.jl.player.advanced.PlaybackListener;

public class Sound {

    private boolean isPlaying = false;
    private AdvancedPlayer player = null;

    public Sound(String path) throws Exception {
        InputStream in = new BufferedInputStream(new FileInputStream(new File(path)));
        player = new AdvancedPlayer(in);
    }

    public Sound(String path, PlaybackListener listener) throws Exception {
        InputStream in = new BufferedInputStream(new FileInputStream(new File(path)));
        player = new AdvancedPlayer(in);
        player.setPlayBackListener(listener);
    }

    public void play() throws Exception {
        if (player != null) {
            isPlaying = true;
            player.play();
        }
    }

    public void play(int begin, int end) throws Exception {
        if (player != null) {
            isPlaying = true;
            player.play(begin, end);
        }
    }

    public void stop() throws Exception {
        if (player != null) {
            player.stop();
            isPlaying = false;
        }
    }

    public boolean isPlaying() {
        return isPlaying;
    }

    public static void main(String[] args) {
        System.out.println("lecture de son");
        try {
            Sound sound = new Sound("C:/Documents and Settings/cngo/Bureau/Stage-Save/TCPIP_AndroidJava/TCPIP_V6_Sound/OpeningSuite.mp3");
            System.out.println("playing : " + sound.isPlaying());
            sound.play();
            System.out.println("playing : " + sound.isPlaying());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Thanks to @murtaza.webdev for his answers!

Audio recorder problem in Java

I have a problem while recording audio. I created a servlet and modified the Java Sound API demo code to the point where I can finally record audio. The problem is that when I play the recording back, the player shows its total time as 645.45 or something like that, even though I recorded for only a couple of minutes. Another problem is that the audio gets saved in the Eclipse directory instead of the project directory.
This is the servlet code.
package com;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.Clip;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;
import javax.sound.sampled.TargetDataLine;

public class SoundRecorder extends HttpServlet {

    private static final long serialVersionUID = 1L;
    static protected boolean running;
    static ByteArrayOutputStream out;
    double fileName = Math.random();
    //strFilename = nowLong.toString();

    public SoundRecorder() {
        System.out.println("Filename will be..." + fileName + ".wav");
    }

    public void init() {
    }

    public void destroy() {
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        System.out.println("call received..");
        String method = request.getParameter("method");
        System.out.println(method);
        if ("record".equalsIgnoreCase(method)) {
            captureAudio(true);
        } else if ("stop".equalsIgnoreCase(method)) {
            captureAudio(false);
        } else if ("play".equalsIgnoreCase(method)) {
            System.out.println("yet to write");
            playAudio();
        }
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        System.out.println("call received..");
        String method = request.getParameter("method");
        System.out.println(method);
        doGet(request, response);
    }

    private void captureAudio(boolean capturing) {
        File outputFile = new File(fileName + ".wav");
        AudioFormat audioFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100.0F, 16, 2, 4, 44100.0F, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, audioFormat);
        TargetDataLine targetDataLine = null;
        try {
            targetDataLine = (TargetDataLine) AudioSystem.getLine(info);
            targetDataLine.open(audioFormat);
        } catch (LineUnavailableException e) {
            System.out.println("unable to get a recording line");
            e.printStackTrace();
            System.exit(1);
        }
        AudioFileFormat.Type targetType = AudioFileFormat.Type.WAVE;
        final Recorder recorder = new Recorder(targetDataLine, targetType, outputFile);
        System.out.println("Recording...");
        if (capturing) {
            recorder.start();
        } else {
            recorder.stopRecording();
        }
    }

    private void playAudio() {
        try {
            File file = new File(fileName + ".wav");
            AudioInputStream stream = AudioSystem.getAudioInputStream(file);
            AudioFormat format = stream.getFormat();
            DataLine.Info info = new DataLine.Info(Clip.class, stream.getFormat());
            Clip clip = (Clip) AudioSystem.getLine(info);
            clip.open(stream);
            clip.start();
        } catch (Exception e) {
            System.err.println("Line unavailable: " + e);
            System.exit(-4);
        }
    }
}
And this is the recorder class
import java.io.File;
import java.io.IOException;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.TargetDataLine;

public class Recorder extends Thread {

    private TargetDataLine m_line;
    private AudioFileFormat.Type m_targetType;
    private AudioInputStream m_audioInputStream;
    private File m_outputFile;

    public Recorder(TargetDataLine line, AudioFileFormat.Type targetType, File file) {
        m_line = line;
        m_audioInputStream = new AudioInputStream(line);
        m_targetType = targetType;
        m_outputFile = file;
    }

    /** Starts the recording.
        To accomplish this, (i) the line is started and (ii) the
        thread is started.
    */
    public void start() {
        m_line.start();
        super.start();
    }

    /** Stops the recording. */
    public void stopRecording() {
        m_line.stop();
        m_line.close();
    }

    /** Main working method. */
    public void run() {
        try {
            AudioSystem.write(m_audioInputStream, m_targetType, m_outputFile);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private static void closeProgram() {
        System.out.println("Program closing.....");
        System.exit(1);
    }

    private static void out(String strMessage) {
        System.out.println(strMessage);
    }
}
When developing with servlets, you need to realize that there's only one servlet instance throughout the whole webapp's lifetime, from startup until shutdown. So, the HTTP requests from all visitors, all sessions, all browser windows/tabs, etc will all share the same servlet instance. Also, when you make a variable static, it will be shared among all instances of the same class (which is not really relevant here since there's only one servlet instance anyway).
In other words, those variables which you've declared in the servlet are not threadsafe:
static protected boolean running;
static ByteArrayOutputStream out;
double fileName = Math.random();
There's only one of each, and they are used by all visitors simultaneously. For the first two variables, which are continuously modified, this will lead to major threadsafety problems, and for the third variable it means that all visitors record to the very same file. You need to declare them inside the doGet() block. You could store the recording in the session under a unique request-based token as the key, and then pass that key along with the subsequent requests.
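A sketch of that change (not a drop-in fix; the token scheme here is hypothetical):

public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    //per-request state instead of shared instance fields
    String token = request.getParameter("token");
    if (token == null) {
        token = java.util.UUID.randomUUID().toString(); //one token per recording
        request.getSession().setAttribute(token, token + ".wav");
    }
    String fileName = (String) request.getSession().getAttribute(token);
    //... pass fileName down to captureAudio()/playAudio() instead of using the field ...
}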
As to the problem of the file being saved in an unexpected location: when you use relative paths in java.io.File in a servlet, they will be relative to the directory from which the webserver was started. If you start it from inside Eclipse, the file is saved in the Eclipse directory. You should use an absolute path in java.io.File instead. If your intent is to save it in the public web content (where your JSPs and the /WEB-INF folder are located), then you need ServletContext#getRealPath() to convert a web path to an absolute disk path.
String relativeWebPath = "filename.ext";
String absoluteDiskPath = getServletContext().getRealPath(relativeWebPath);
File file = new File(absoluteDiskPath);
There's however another problem with this: all files will be erased whenever you redeploy the webapp. If you want slightly more permanent storage, then you should store the files outside the web project, e.g. in C:/path/to/recordings.
File file = new File("C:/path/to/recordings/filename.ext");
