Hi, I've been writing a chat client and wanted to test the Java Sound API. I've managed to get sound from the mic to the speakers on different computers via UDP, but the sound isn't very clear. To check whether this was caused by lost packets etc. in the UDP protocol, I wrote a small test that routes the sound to the speakers on the same machine as the mic. The sound is no different, which makes me think I have some settings wrong for reading or writing the sound. Can anybody have a look at my code and tell me how to make the sound clearer?
package test;
import java.awt.*;
import java.awt.event.*;
import java.io.*;
import javax.sound.sampled.*;
import javax.swing.*;
@SuppressWarnings("serial")
public class VoiceTest extends JFrame {
private JButton chat = new JButton("Voice");
private GUIListener gl = new GUIListener();
private IncomingSoundListener isl = new IncomingSoundListener();
private OutgoingSoundListener osl = new OutgoingSoundListener();
private boolean inVoice = true;
private boolean outVoice = false;
AudioFormat format = getAudioFormat();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
public VoiceTest() throws IOException {
super ("Test");
//new Thread(tl).start();
new Thread(isl).start();
Container contentPane = this.getContentPane();
this.setSize(200,100);
this.setLocationRelativeTo(null);
this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
chat.setBounds(10,10,80,30);
chat.addActionListener(gl);
contentPane.add(chat);
this.setVisible(true);
}
private AudioFormat getAudioFormat() {
float sampleRate = 8000.0F;
int sampleSizeBits = 16;
int channels = 1;
boolean signed = true;
boolean bigEndian = false;
//AudioFormat.Encoding.ULAW
return new AudioFormat(sampleRate, sampleSizeBits, channels, signed, bigEndian);
}
class GUIListener implements ActionListener {
public void actionPerformed(ActionEvent actionevent) {
String action = actionevent.getActionCommand();
switch (action) {
case "Mute":
outVoice = false;
chat.setText("Voice");
break;
case "Voice":
new Thread(osl).start();
outVoice = true;
chat.setText("Mute");
break;
}
}
}
class IncomingSoundListener implements Runnable {
@Override
public void run() {
try {
System.out.println("Listening for incoming sound");
DataLine.Info speakerInfo = new DataLine.Info(SourceDataLine.class, format);
SourceDataLine speaker = (SourceDataLine) AudioSystem.getLine(speakerInfo);
speaker.open(format);
speaker.start();
while(inVoice) {
byte[] data = baos.toByteArray();
baos.reset();
ByteArrayInputStream bais = new ByteArrayInputStream(data);
AudioInputStream ais = new AudioInputStream(bais,format,data.length);
int numBytesRead = 0;
if ((numBytesRead = ais.read(data)) != -1) speaker.write(data, 0, numBytesRead);
ais.close();
bais.close();
}
speaker.drain();
speaker.close();
System.out.println("Stopped listening for incoming sound");
} catch (Exception e) {
e.printStackTrace();
}
}
}
class OutgoingSoundListener implements Runnable {
@Override
public void run() {
try {
System.out.println("Listening for outgoing sound");
DataLine.Info micInfo = new DataLine.Info(TargetDataLine.class, format);
TargetDataLine mic = (TargetDataLine) AudioSystem.getLine(micInfo);
mic.open(format);
byte tmpBuff[] = new byte[mic.getBufferSize()/5];
mic.start();
while(outVoice) {
int count = mic.read(tmpBuff,0,tmpBuff.length);
if (count > 0) baos.write(tmpBuff, 0, count);
}
mic.drain();
mic.close();
System.out.println("Stopped listening for outgoing sound");
} catch (Exception e) {
e.printStackTrace();
}
}
}
/**
* @param args
* @throws IOException
*/
public static void main(String[] args) throws IOException {
new VoiceTest();
}
}
You should try higher sampling rates and find an acceptable quality/size trade-off for your audio stream.
Checking the AudioFormat reference is also a good starting point for getting the idea.
Try changing the local variables in your getAudioFormat() method to this:
private AudioFormat getAudioFormat() {
float sampleRate = 16000.0F;
int sampleSizeBits = 16;
int channels = 1;
...
}
This is equivalent to a 256 kbps mono audio stream.
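For the arithmetic behind that figure: 16,000 samples/s × 16 bits/sample × 1 channel = 256,000 bit/s = 256 kbps, double the 128 kbps of the original 8 kHz format.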
Related
I am learning to use the sound API of Java. I've watched a tutorial on YouTube where the instructor simply creates SourceDataLine and TargetDataLine instances and uses them in separate threads. He starts the threads one after another with a Thread.sleep() call in between; during that sleep the sound is captured, and afterwards it is played back.
Now, in the program below, I've tried to extend the idea to a continuous stream of audio: I speak, and the sound is heard automatically. But I can't get it to work. I know I'm doing something wrong, as I'm still a newbie in this regard. What changes should I make, and where? It won't be a problem if there is a reasonable delay between recording and playing the sound.
P.S. I will try to use this with OpenCV video sharing in another program. If you know something about that, please feel free to share it. Thanks!
import javax.sound.sampled.*;
import java.io.ByteArrayOutputStream;
public class Main {
public boolean recording = true;
public int rate = 0;
public static void main(String[] args) throws Exception{
AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
final ByteArrayOutputStream out = new ByteArrayOutputStream();
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
final TargetDataLine targetLine = (TargetDataLine) AudioSystem.getLine(info);
info = new DataLine.Info(SourceDataLine.class, format);
final SourceDataLine sourceLine = (SourceDataLine) AudioSystem.getLine(info);
Main m = new Main();
new record(m, format, out, targetLine);
new play(m, format, out, sourceLine);
}
synchronized public void record(TargetDataLine targetLine, ByteArrayOutputStream out){
while(!recording){
try{
wait();
}catch(Exception e){
System.out.println(e);
}
}
byte[] data = new byte[targetLine.getBufferSize()/5];
int readBytes;
readBytes = targetLine.read(data, 0, data.length);
out.write(data, 0, readBytes);
}
synchronized public void play(SourceDataLine sourceLine, ByteArrayOutputStream out){
while(recording){
try{
wait();
}catch(Exception e){
System.out.println(e);
}
}
sourceLine.write(out.toByteArray(), 0, out.size());
}
synchronized public int change(){
rate++;
if(rate > 6000 && recording){
rate = 0;
recording = false;
notifyAll();
return 1;
}
else if(rate > 6000 && !recording){
rate = 0;
recording = true;
notifyAll();
return 1;
}
return 0;
}
}
class record implements Runnable{
private Main m;
private AudioFormat format;
private ByteArrayOutputStream out;
final TargetDataLine targetLine;
public record(Main m, AudioFormat format, ByteArrayOutputStream out, TargetDataLine targetLine) throws Exception{
this.m = m;
this.format = format;
this.out = out;
this.targetLine = targetLine;
targetLine.open();
System.out.println("Started recording...");
new Thread(this).start();
}
@Override
public void run() {
targetLine.start();
while(true){
m.record(targetLine, out);
while(m.change() == 1) targetLine.stop();
targetLine.start();
}
}
}
class play implements Runnable{
private Main m;
private AudioFormat format;
private ByteArrayOutputStream out;
final SourceDataLine sourceLine;
public play(Main m, AudioFormat format, ByteArrayOutputStream out, SourceDataLine sourceLine) throws Exception{
this.m = m;
this.format = format;
this.out = out;
this.sourceLine = sourceLine;
sourceLine.open();
System.out.println("Started playing...");
new Thread(this).start();
}
@Override
public void run() {
sourceLine.start();
while(true){
m.play(sourceLine, out);
while(m.change() == 1) sourceLine.stop();
sourceLine.start();
}
}
}
Edit:
I can get the two streams to run one after another as follows, but I have to hard-code the threads: I wrote four threads individually. How can I write more efficient code, i.e. reuse the first two threads and record and play sound continuously? My synchronization doesn't seem to work.
import javax.sound.sampled.*;
import java.io.ByteArrayOutputStream;
import java.math.BigInteger;
public class Main {
public static void main(String[] args) throws Exception{
AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
final SourceDataLine sourceLine = (SourceDataLine) AudioSystem.getLine(info);
sourceLine.open();
info = new DataLine.Info(TargetDataLine.class, format);
final TargetDataLine targetLine = (TargetDataLine) AudioSystem.getLine(info);
targetLine.open();
final ByteArrayOutputStream out = new ByteArrayOutputStream();
Thread record = new Thread(){
@Override
public void run(){
targetLine.start();
byte[] data = new byte[targetLine.getBufferSize()/5];
int readBytes;
while(true){
readBytes = targetLine.read(data, 0, data.length);
out.write(data, 0, readBytes);
}
}
};
Thread play = new Thread(){
@Override
public void run(){
sourceLine.start();
while(true){
sourceLine.write(out.toByteArray(), 0, out.toByteArray().length);
}
}
};
final ByteArrayOutputStream out1 = new ByteArrayOutputStream();
Thread record1 = new Thread(() -> {
targetLine.start();
byte[] data = new byte[targetLine.getBufferSize()/5];
int readBytes;
while(true){
readBytes = targetLine.read(data, 0, data.length);
out1.write(data, 0, readBytes);
}
});
Thread play1 = new Thread(() -> {
sourceLine.start();
while(true){
sourceLine.write(out1.toByteArray(), 0, out1.toByteArray().length);
}
});
record.start();
System.out.println("Recording...");
Thread.sleep(4000);
targetLine.stop();
targetLine.drain();
targetLine.close();
play.start();
Thread.sleep(4000);
System.out.println("Playing...");
sourceLine.stop();
sourceLine.drain();
sourceLine.close();
targetLine.open();
sourceLine.open();
record1.start();
System.out.println("Recording...");
Thread.sleep(4000);
targetLine.stop();
targetLine.close();
play1.start();
Thread.sleep(4000);
System.out.println("Playing...");
sourceLine.stop();
sourceLine.close();
}
}
I know the question is a couple of years old, but in case anyone else is also unsure, I'll give this a stab. I don't believe you need your TargetDataLine and SourceDataLine in separate threads if you just want to record audio and simultaneously play it back instantly (as quickly as the Java Sound API will allow, anyway...).
If you don't want your program to hang, you'll need at least one thread for reading audio in and writing audio out to your system. If you used two threads, I believe the delay would be quite noticeable.
Below is a simple implementation of real-time recording and playback in a single thread. When I tested this with a microphone and speakers there was some latency, I'd guess under 1000 ms. There's probably a much better way to do this with lower latency if you do some research. This may also be of interest: How to synchronize a TargetDataLine and SourceDataLine in Java (Synchronize audio recording and playback)
import javax.sound.sampled.*;
public class Main {
public static void main(String[] args) throws Exception {
AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
final SourceDataLine sourceLine = (SourceDataLine) AudioSystem.getLine(info);
sourceLine.open();
info = new DataLine.Info(TargetDataLine.class, format);
final TargetDataLine targetLine = (TargetDataLine) AudioSystem.getLine(info);
targetLine.open();
byte[] data = new byte[1024];
sourceLine.start();
targetLine.start();
Thread thread = new Thread() {
@Override
public void run() {
while (true) {
targetLine.read(data, 0, data.length);
sourceLine.write(data, 0, data.length);
}
}
};
thread.start();
}
}
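For a rough sense of scale: this format produces 44,100 frames/s × 4 bytes/frame = 176,400 bytes per second, so each 1,024-byte chunk read in the loop holds only about 6 ms of audio. The latency I observed presumably comes from the lines' internal buffers rather than from this chunk size.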
I am trying to capture audio from the PC (from the speaker/headphones) and send it via a socket (UDP, if possible) to another computer, which must play it back. I have found some code to do this:
Server:
import javax.sound.sampled.*;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
public class Server {
ServerSocket MyService;
Socket clientSocket = null;
InputStream input;
AudioFormat audioFormat;
SourceDataLine sourceDataLine;
byte tempBuffer[] = new byte[10000];
static Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo();
Server() throws LineUnavailableException {
try {
Mixer mixer_ = AudioSystem.getMixer(mixerInfo[0]);
audioFormat = getAudioFormat();
DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, audioFormat);
sourceDataLine = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
sourceDataLine.open(audioFormat);
sourceDataLine.start();
MyService = new ServerSocket(500);
clientSocket = MyService.accept();
input = new BufferedInputStream(clientSocket.getInputStream());
while (input.read(tempBuffer) != -1) {
sourceDataLine.write(tempBuffer, 0, 10000);
}
} catch (IOException e) {
e.printStackTrace();
}
}
private AudioFormat getAudioFormat() {
float sampleRate = 8000.0F;
int sampleSizeInBits = 8;
int channels = 1;
boolean signed = true;
boolean bigEndian = false;
return new AudioFormat(
sampleRate,
sampleSizeInBits,
channels,
signed,
bigEndian);
}
public static void main(String s[]) throws LineUnavailableException {
Server s2 = new Server();
}
}
Client:
import javax.sound.sampled.*;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.net.Socket;
public class Client {
boolean stopCapture = false;
AudioFormat audioFormat;
TargetDataLine targetDataLine;
BufferedOutputStream out = null;
BufferedInputStream in = null;
Socket sock = null;
public static void main(String[] args) {
Client tx = new Client();
tx.captureAudio();
}
private void captureAudio() {
try {
sock = new Socket("192.168.1.38", 500);
out = new BufferedOutputStream(sock.getOutputStream());
in = new BufferedInputStream(sock.getInputStream());
Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo();
audioFormat = getAudioFormat();
DataLine.Info dataLineInfo = new DataLine.Info(
TargetDataLine.class, audioFormat);
Mixer mixer = AudioSystem.getMixer(mixerInfo[2]);
targetDataLine = (TargetDataLine) mixer.getLine(dataLineInfo);
targetDataLine.open(audioFormat);
targetDataLine.start();
Thread captureThread = new CaptureThread();
captureThread.start();
} catch (Exception e) {
e.printStackTrace();
System.exit(0);
}
}
class CaptureThread extends Thread {
byte tempBuffer[] = new byte[10000];
@Override
public void run() {
stopCapture = false;
try {
while (!stopCapture) {
int cnt = targetDataLine.read(tempBuffer, 0,
tempBuffer.length);
out.write(tempBuffer);
}
} catch (Exception e) {
e.printStackTrace();
System.exit(0);
}
}
}
private AudioFormat getAudioFormat() {
float sampleRate = 8000.0F;
int sampleSizeInBits = 8;
int channels = 1;
boolean signed = true;
boolean bigEndian = false;
return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed,
bigEndian);
}
}
But the client throws:
java.lang.IllegalArgumentException: Line unsupported: interface TargetDataLine supporting format PCM_SIGNED 8000.0 Hz, 8 bit, mono, 1 bytes/frame,
at java.desktop/com.sun.media.sound.DirectAudioDevice.getLine(DirectAudioDevice.java:175)
at Client.captureAudio(Client.java:28)
at Client.main(Client.java:15)
and I do not know what to do. (I know this is not a UDP socket, but I first want to have some code that works.) Thanks in advance.
I'm writing an audio chat application in Java and I'm experiencing latency/delays that mostly show up if the application is left running for a while.
I recreated the problem in the sample application below. It simply loops sound from the microphone to the speaker. Initially it behaves as expected: when you press the button and speak into the microphone, you hear yourself in the speaker with a tiny delay. However, if the program is left running for a while (a week), that delay increases to several seconds.
I tested with different headsets. I tested with Java 8, 9 and 10, and it consistently displays the same behaviour. I also experimented with drain() and flush() and so on, but the only thing that gets rid of the delay is to close and recreate the TargetDataLine. Recreating the line is, however, not an option for my application, since it takes too long and audio is unavailable while the line is recreated.
Is this a known limitation, or am I doing something wrong? Very thankful for any insight or ideas!
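To be concrete, the close-and-recreate workaround I mean looks roughly like this (just a sketch; mic, m, targetInfo and format are the variables from the code below):
// Workaround sketch: discard the stale line and open a fresh one.
// No audio can be captured while this runs, which is why it is not
// acceptable for my application.
mic.stop();
mic.close();
mic = (TargetDataLine) m.getLine(targetInfo); // reacquire from the same mixer
mic.open(format, 1280);
mic.start();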
import javax.sound.sampled.*;
import javax.swing.*;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
/**
* Created by asa on 2018-04-03.
*/
public class Main {
public static void main(String[] args) throws LineUnavailableException {
String title = "";
if (args.length > 0) {
title = args[0];
}
String mixerName = "USB"; // Part of the name of my headset
if (args.length > 1) {
mixerName = args[1];
}
AudioFormat format = new AudioFormat(8000f,
16,
1,
true,
false);
DataLine.Info targetInfo = new TargetDataLine.Info(TargetDataLine.class, format);
TargetDataLine mic = null;
for (Mixer.Info info : AudioSystem.getMixerInfo()) {
if (info.getName().contains(mixerName) && !info.getName().contains("Port")) {
Mixer m = AudioSystem.getMixer(info);
if (m.isLineSupported(targetInfo)) {
mic = (TargetDataLine) m.getLine(targetInfo);
break;
}
}
}
mic.open(format, 1280);
mic.start();
DataLine.Info sourceInfo = new DataLine.Info(SourceDataLine.class, format);
SourceDataLine speaker = null;
for (Mixer.Info info : AudioSystem.getMixerInfo()) {
if (info.getName().contains(mixerName) && !info.getName().contains("Port")) {
Mixer m = AudioSystem.getMixer(info);
if (m.isLineSupported(sourceInfo)) {
speaker = (SourceDataLine) m.getLine(sourceInfo);
break;
}
}
}
speaker.open(format, 8000);
speaker.start();
MicRunnable micRunnable = new MicRunnable(mic, speaker);
new Thread(micRunnable).start();
Frame.show(title, new Frame.PttListener() {
@Override
public void press() {
micRunnable.start();
}
@Override
public void release() {
micRunnable.stop();
}
});
}
private static class MicRunnable implements Runnable {
private final TargetDataLine _mic;
private final SourceDataLine _speaker;
private final Object runLock = new Object();
private volatile boolean running = false;
public MicRunnable(TargetDataLine mic, SourceDataLine speaker) {
_mic = mic;
_speaker = speaker;
}
public void start() {
synchronized (runLock) {
running = true;
runLock.notify();
}
}
public void stop() {
synchronized (runLock) {
running = false;
}
}
@Override
public void run() {
while (true) {
byte[] bytes = new byte[640];
_mic.read(bytes, 0, bytes.length);
if (running) { //getPeakGain(bytes) > 300
_speaker.write(bytes, 0, bytes.length);
}
}
}
}
private static class Frame extends JFrame {
interface PttListener {
void press();
void release();
}
private Frame(String title, PttListener listener) {
setTitle(title);
JPanel content = new JPanel();
JButton pttButton = new JButton("PTT");
pttButton.addMouseListener(new MouseAdapter() {
@Override
public void mousePressed(MouseEvent e) {
listener.press();
}
@Override
public void mouseReleased(MouseEvent e) {
listener.release();
}
});
content.add(pttButton);
setContentPane(content);
setSize(300, 100);
}
public static void show(String title, Frame.PttListener pttListener) {
new Frame(title, pttListener).setVisible(true);
}
}
}
I need to write a simple Java client program to capture a live audio stream.
Requirements:
RTP Audio Packets.
8kHz, 16-bit Linear Samples (Linear PCM).
4 frames of 20ms audio will be sent in each RTP Packet.
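For reference, the buffer arithmetic these requirements imply: one 20 ms frame at 8 kHz, 16-bit mono is 160 samples × 2 bytes = 320 bytes, so each RTP packet carries 4 × 320 = 1,280 bytes of audio payload plus the 12-byte RTP header.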
After some searching I found sample code on the internet to capture the audio, but it plays a beep sound.
Code
import java.io.ByteArrayInputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;
public class Server {
AudioInputStream audioInputStream;
static AudioInputStream ais;
static AudioFormat format;
static boolean status = true;
static int port = 31007;
static int sampleRate = 44100;
static DataLine.Info dataLineInfo;
static SourceDataLine sourceDataLine;
public static void main(String args[]) throws Exception
{
System.out.println("Server started at port:"+port);
#SuppressWarnings("resource")
DatagramSocket serverSocket = new DatagramSocket(port);
/**
* Formula for lag = (byte_size/sample_rate)*2
* Byte size 9728 will produce ~ 0.45 seconds of lag. Voice slightly broken.
* Byte size 1400 will produce ~ 0.06 seconds of lag. Voice extremely broken.
* Byte size 4000 will produce ~ 0.18 seconds of lag. Voice slightly more broken then 9728.
*/
byte[] receiveData = new byte[4096];
format = new AudioFormat(sampleRate, 16, 2, true, false);
dataLineInfo = new DataLine.Info(SourceDataLine.class, format);
sourceDataLine = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
sourceDataLine.open(format);
sourceDataLine.start();
//FloatControl volumeControl = (FloatControl) sourceDataLine.getControl(FloatControl.Type.MASTER_GAIN);
//volumeControl.setValue(1.00f);
DatagramPacket receivePacket = new DatagramPacket(receiveData, receiveData.length);
ByteArrayInputStream baiss = new ByteArrayInputStream(receivePacket.getData());
while (status == true)
{
System.out.println("Reciving Packets");
serverSocket.receive(receivePacket);
ais = new AudioInputStream(baiss, format, receivePacket.getLength());
toSpeaker(receivePacket.getData());
}
sourceDataLine.drain();
sourceDataLine.close();
}
public static void toSpeaker(byte soundbytes[]) {
try
{
System.out.println("At the speaker");
sourceDataLine.write(soundbytes, 0, soundbytes.length);
} catch (Exception e) {
System.out.println("Not working in speakers...");
e.printStackTrace();
}
}
}
I think I am not able to find the proper format to capture packets sent in the given format.
Can anyone help me find the proper AudioFormat to capture this audio stream? Any link pointing to the same would be helpful for me... Thanks... :)
Answer
float sampleRate = 8000;
int sampleSizeInBits = 16;
int channels = 1;
boolean signed = true;
boolean bigEndian = true;
AudioFormat format = new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
UDP + RTP Packet Format
When buffering, drop the first 12 bytes of each packet's data, as they contain the RTP header.
receivePacket = new DatagramPacket(receiveData, receiveData.length);
serverSocket.receive(receivePacket);
// Drop the first 12 bytes (the RTP header) and keep only the audio payload.
byte[] packet = Arrays.copyOfRange(receivePacket.getData(), 12, receivePacket.getLength());
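(Those 12 bytes are the fixed RTP header: version and flag bits, payload type, sequence number, timestamp and SSRC. Packets carrying CSRC entries or a header extension have a longer header, so this offset only holds for the basic fixed-header case.)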
Hope this helps you in the future, and feel free to correct it if it's wrong. Thanks.
You can try this implementation of a client and server based on datagram sockets. It uses a mono, 8000 Hz, 16-bit, signed, big-endian audio format. The server runs on port 9786, while the client uses port 8786. I guess the code is quite simple to understand.
Server:
import java.io.*;
import java.net.*;
import javax.sound.sampled.*;
public class Server {
ByteArrayOutputStream byteOutputStream;
AudioFormat adFormat;
TargetDataLine targetDataLine;
AudioInputStream InputStream;
SourceDataLine sourceLine;
private AudioFormat getAudioFormat() {
float sampleRate = 8000.0F;
int sampleSizeInBits = 16;
int channels = 1;
boolean signed = true;
boolean bigEndian = true;
return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
}
public static void main(String args[]) {
new Server().runVOIP();
}
public void runVOIP() {
try {
DatagramSocket serverSocket = new DatagramSocket(9786);
byte[] receiveData = new byte[4096];
while (true) {
DatagramPacket receivePacket = new DatagramPacket(receiveData, receiveData.length);
serverSocket.receive(receivePacket);
System.out.println("RECEIVED: " + receivePacket.getAddress().getHostAddress() + " " + receivePacket.getPort());
try {
byte audioData[] = receivePacket.getData();
InputStream byteInputStream = new ByteArrayInputStream(audioData);
AudioFormat adFormat = getAudioFormat();
InputStream = new AudioInputStream(byteInputStream, adFormat, audioData.length / adFormat.getFrameSize());
DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, adFormat);
sourceLine = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
sourceLine.open(adFormat);
sourceLine.start();
Thread playThread = new Thread(new PlayThread());
playThread.start();
} catch (Exception e) {
System.out.println(e);
System.exit(0);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
class PlayThread extends Thread {
byte tempBuffer[] = new byte[4096];
public void run() {
try {
int cnt;
while ((cnt = InputStream.read(tempBuffer, 0, tempBuffer.length)) != -1) {
if (cnt > 0) {
sourceLine.write(tempBuffer, 0, cnt);
}
}
} catch (Exception e) {
System.out.println(e);
System.exit(0);
}
}
}
}
Client:
import java.io.*;
import java.net.*;
import javax.sound.sampled.*;
public class Client {
boolean stopaudioCapture = false;
ByteArrayOutputStream byteOutputStream;
AudioFormat adFormat;
TargetDataLine targetDataLine;
AudioInputStream InputStream;
SourceDataLine sourceLine;
public static void main(String args[]) {
new Client();
}
public Client() {
captureAudio();
}
private AudioFormat getAudioFormat() {
float sampleRate = 8000.0F;
int sampleSizeInBits = 16;
int channels = 1;
boolean signed = true;
boolean bigEndian = true;
return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
}
private void captureAudio() {
try {
adFormat = getAudioFormat();
DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, adFormat);
targetDataLine = (TargetDataLine) AudioSystem.getLine(dataLineInfo);
targetDataLine.open(adFormat);
targetDataLine.start();
Thread captureThread = new Thread(new CaptureThread());
captureThread.start();
} catch (Exception e) {
StackTraceElement stackEle[] = e.getStackTrace();
for (StackTraceElement val : stackEle) {
System.out.println(val);
}
System.exit(0);
}
}
class CaptureThread extends Thread {
byte tempBuffer[] = new byte[4096];
@Override
public void run() {
stopaudioCapture = false;
try {
DatagramSocket clientSocket = new DatagramSocket(8786);
InetAddress IPAddress = InetAddress.getByName("127.0.0.1");
int cnt;
while (!stopaudioCapture) {
cnt = targetDataLine.read(tempBuffer, 0, tempBuffer.length);
if (cnt > 0) {
DatagramPacket sendPacket = new DatagramPacket(tempBuffer, tempBuffer.length, IPAddress, 9786);
clientSocket.send(sendPacket);
}
}
} catch (Exception e) {
System.out.println("CaptureThread::run()" + e);
System.exit(0);
}
}
}
}
I have some code that gets input from the microphone, saves it as a .wav file and sends it to the server, where the file is received. Now I want to modify it so that the client can send multiple .wav files and the server receives and stores all of them. Please help me.
The code on the client side is as follows:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.io.*;
import java.lang.*;
import java.net.*;
import javax.sound.sampled.*;
public class AudioRecorder extends JFrame
{
public final static int DEF_PORT=9;
public final static int MAX_SIZE=65507;
public static int flag=0;
boolean stopCapture = false;
ByteArrayOutputStream byteArrayOutputStream;
AudioFormat audioFormat;
TargetDataLine targetDataLine;
AudioInputStream audioInputStream;
SourceDataLine sourceDataLine;
//creating file
File file=new File("chat.wav");
FileOutputStream fout;
AudioFileFormat.Type fileType;
public AudioRecorder(){//constructor
try
{
fout=new FileOutputStream(file);
}
catch (FileNotFoundException e1)
{
e1.printStackTrace();
}
//button play,stop, capture
final JButton captureBtn = new JButton("Capture");
final JButton stopBtn = new JButton("Stop");
final JButton playBtn = new JButton("Save");
captureBtn.setEnabled(true);
stopBtn.setEnabled(false);
playBtn.setEnabled(false);
captureBtn.addActionListener(new ActionListener()
{
public void actionPerformed(ActionEvent e)
{
captureBtn.setEnabled(false);
stopBtn.setEnabled(true);
playBtn.setEnabled(false);
captureAudio();
}
} );
getContentPane().add(captureBtn);
stopBtn.addActionListener(new ActionListener()
{
public void actionPerformed(ActionEvent e)
{
captureBtn.setEnabled(true);
stopBtn.setEnabled(false);
playBtn.setEnabled(true);
//Terminate the capturing of input data from the microphone.
stopCapture = true;
}//end actionPerformed
}//end ActionListener
);//end addActionListener()
getContentPane().add(stopBtn);
playBtn.addActionListener(new ActionListener()
{
public void actionPerformed(ActionEvent e)
{
//Play back all of the data that was saved during capture.
saveAudio();
}//end actionPerformed
}//end ActionListener
);//end addActionListener()
getContentPane().add(playBtn);
getContentPane().setLayout(new FlowLayout());
setTitle("Capture/Playback Demo");
setDefaultCloseOperation(EXIT_ON_CLOSE);
setSize(250,70);
setVisible(true);
}//end constructor
//This method captures audio input from a microphone and saves it in a ByteStreamObject
private void captureAudio()
{
try{
//Get everything set up for capture
audioFormat = getAudioFormat();
DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class,audioFormat);
targetDataLine = (TargetDataLine)
AudioSystem.getLine(dataLineInfo);
targetDataLine.open(audioFormat);
targetDataLine.start();
//Create a thread to capture the microphone data and start it running. It will run until the Stop button is clicked.
Thread captureThread = new Thread(new CaptureThread());
captureThread.start();
}
catch (Exception e)
{
System.out.println(e);
System.exit(0);
}//end catch
}//end captureAudio method
//This method plays back the audio data that has been saved in the ByteArrayOutputStream
private void saveAudio()
{
try
{
//Get everything set up for playback. Get the previously-saved data into a byte array object.
byte audioData[] = byteArrayOutputStream.toByteArray();
//Get an input stream on the byte array containing the data
InputStream byteArrayInputStream = new ByteArrayInputStream(audioData);
AudioFormat audioFormat = getAudioFormat();
audioInputStream = new AudioInputStream(byteArrayInputStream,audioFormat,audioData.length/audioFormat.getFrameSize());
DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class,audioFormat);
sourceDataLine = (SourceDataLine)AudioSystem.getLine(dataLineInfo);
sourceDataLine.open(audioFormat);
sourceDataLine.start();
//flag=1;
//Create a thread to play back the data and start it running. It will run until all the data has been played back.
Thread saveThread = new Thread(new SaveThread());
saveThread.start();
saveThread.join();
try{
InetAddress server=InetAddress.getByName("127.0.0.1");
Socket soc = new Socket(server, 8020);
FileInputStream fis = new FileInputStream("chat.wav");
byte[] buffer = new byte[fis.available()];
fis.read(buffer);
ObjectOutputStream oos = new ObjectOutputStream(soc.getOutputStream()) ;
oos.writeObject(buffer);
}
catch(Exception e)
{
System.out.println("Error : "+e);
}
//function to record and save audio file
}
catch (Exception e)
{
System.out.println(e);
System.exit(0);
}//end catch
}//end playAudio
//This method creates and returns an AudioFormat object for a given set of format parameters.
//If these parameters don't work well for you, try some of the other allowable parameter values,
//which are shown in comments following the declarations.
private AudioFormat getAudioFormat()
{
float sampleRate = 8000.0F;
//8000,11025,16000,22050,44100
int sampleSizeInBits = 16;
//8,16
int channels = 1;
//1,2
boolean signed = true;
//true,false
boolean bigEndian = false;
//true,false
return new AudioFormat(sampleRate,sampleSizeInBits,channels,signed,bigEndian);
}//end getAudioFormat
//===================================//
//Inner class to capture data from microphone
class CaptureThread extends Thread
{
//An arbitrary-size temporary holding buffer
byte tempBuffer[] = new byte[10000];
public void run(){
byteArrayOutputStream = new ByteArrayOutputStream();
stopCapture = false;
try{//Loop until stopCapture is set by another thread that services the Stop button.
while(!stopCapture){
//Read data from the internal buffer of the data line.
int cnt = targetDataLine.read(tempBuffer,0,tempBuffer.length);
if(cnt > 0){
//Save data in output stream
// object.
byteArrayOutputStream.write(tempBuffer, 0, cnt);
}//end if
}//end while
byteArrayOutputStream.close();
}catch (Exception e) {
System.out.println(e);
System.exit(0);
}//end catch
}//end run
}//end inner class CaptureThread
//===================================//
//Inner class to play back the data
// that was saved.
class SaveThread extends Thread{
byte tempBuffer[] = new byte[10000];
public void run(){
try{
//Write all of the captured audio to the file in one call.
if (AudioSystem.isFileTypeSupported(AudioFileFormat.Type.AU,audioInputStream)) {
AudioSystem.write(audioInputStream, AudioFileFormat.Type.AU, file);
}
}catch (Exception e) {
System.out.println(e);
System.exit(0);
}//end catch
}//end run
}//end inner class PlayThread
//===================================//
public static void main(String args[])
{
new AudioRecorder();
}//end main
}//end outer class AudioRecorder
Code on server side:
import java.lang.*;
import java.io.*;
import java.net.*;
public class MyServer
{
public final static int DEF_PORT=9;
public final static int MAX_SIZE=65507;
public static void main(String args[])
{
//byte[] buffer=new byte[100000];
try
{
ServerSocket ser = new ServerSocket(8020);
Socket clientSocket = ser.accept();
ObjectInputStream ois = new
ObjectInputStream(clientSocket.getInputStream());
byte[] buffer = (byte[])ois.readObject();
FileOutputStream fos = new
FileOutputStream("a1.wav");
fos.write(buffer);
fos.close();
}
catch (Exception e)
{
e.printStackTrace();
}
}
}
Why don't you try sending the bytes?
byte[] content = Files.readAllBytes(f.toPath());
oos.writeObject(content);
byte[] content = (byte[]) ois.readObject();
Files.write(f.toPath(), content);
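Since ObjectOutputStream and ObjectInputStream keep the stream open between writes, you can extend this to multiple .wav files over the same socket. A rough sketch (filesToSend and the output file names are illustrative; Files and Paths come from java.nio.file):
// Client side: write each file's bytes as a separate object,
// then a null as an end-of-transfer marker.
ObjectOutputStream oos = new ObjectOutputStream(soc.getOutputStream());
for (File f : filesToSend) {
oos.writeObject(Files.readAllBytes(f.toPath()));
}
oos.writeObject(null);
// Server side: read byte arrays until the null marker arrives,
// saving each payload to its own file.
ObjectInputStream ois = new ObjectInputStream(clientSocket.getInputStream());
int i = 0;
Object o;
while ((o = ois.readObject()) != null) {
Files.write(Paths.get("a" + (++i) + ".wav"), (byte[]) o);
}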
The problem here could also be with your tempBuffer[], whose size should match the size of the file you are sending/receiving. You could set the size of tempBuffer[] dynamically, like this:
byte[] tempBuffer = new byte[(int) wavFile.length()];