How do I combine multiple javax.sound.sampled.TargetDataLines? - java

I'm creating a VOIP server & client system, but only 1/(number of connected users) of the voice packets get played. I think it's because it can only play one stream of audio from one TargetDataLine, and there is only one TargetDataLine per device, yet I'm writing multiple audio streams to it each second.
I'm calling line.write(t, 0, t.length); where line is my TargetDataLine and t is my byte array containing samples. Is there a way to combine multiple audio streams into one mono stream before redistributing it to the clients?

I figured it out (I was googling the wrong thing): you just need to add the samples together and mask the result down to the frame's sample size with a bitwise AND.
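A minimal mixing sketch along those lines, assuming 16-bit signed little-endian PCM and equal-length buffers; it clips the summed sample to the 16-bit range rather than relying on a mask alone, since a plain mask lets loud signals wrap around (the class and method names are illustrative, not from the original code):
// Mix several 16-bit signed little-endian PCM buffers into one mono buffer by
// summing the samples and clipping to the 16-bit range to avoid overflow.
public class AudioMixer {
    public static byte[] mixMono(java.util.List<byte[]> buffers, int length) {
        byte[] mixed = new byte[length];
        for (int i = 0; i + 1 < length; i += 2) {
            int sum = 0;
            for (byte[] b : buffers) {
                if (i + 1 < b.length) {
                    // assemble a signed 16-bit sample from two little-endian bytes
                    sum += (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
                }
            }
            // clip instead of letting the sum wrap
            int sample = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
            mixed[i] = (byte) (sample & 0xFF);
            mixed[i + 1] = (byte) ((sample >> 8) & 0xFF);
        }
        return mixed;
    }
}
The resulting mono buffer can then be redistributed to the clients as a single stream.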

Related

Receive input from microphone from 2 processes at once

I've been working on Java speech recognition using Sphinx4 and I currently have an issue.
I have an app that recognizes microphone input using the LiveSpeechRecognizer class of Sphinx4, which works fine. The issue appeared after I added a class that also listens to the microphone and transforms and visualizes the output.
Separately, both classes work OK, but when combined in a single app I get the error:
LineUnavailableException: line with format PCM_SIGNED 44100.0 Hz, 8 bit, mono, 1 bytes/frame, not supported.
I have checked the issue and it seems to be caused by simultaneous access to the microphone. I had the idea of using StreamSpeechRecognizer instead of the Live one, but I failed to retrieve the stream from the microphone input. I tried AudioInputStream for that purpose.
Could you please suggest how I can adjust my code to get both the speech recognition and the oscilloscope to use the microphone simultaneously?
Thanks in advance.
UPD:
This is my attempt to split the microphone input for use in both apps.
....
// capture one buffer of raw bytes from the microphone's TargetDataLine
byte[] data = new byte[dataCaptureSize];
line.read(data, 0, data.length);
// copy the captured bytes and wrap them as an AudioInputStream
ByteArrayOutputStream out = new ByteArrayOutputStream();
out.write(data, 0, data.length);
byte[] audioData = out.toByteArray();
InputStream byteArrayInputStream = new ByteArrayInputStream(audioData);
AudioInputStream audioInputStream = new AudioInputStream(byteArrayInputStream,
        inputFormat,
        audioData.length / inputFormat.getFrameSize());
....
That's how I convert it to the input stream, which is then passed to the StreamSpeechRecognizer, while the array of bytes is transformed with a Fast Fourier Transform and passed to the graph. That doesn't work: it just freezes the graph all the time, so the data displayed is not current.
I tried to run recognition in a separate thread, but it didn't increase performance at all.
My code for splitting into threads is below:
Thread recognitionThread = new Thread(new RecognitionThread(configuration,data));
recognitionThread.join();
recognitionThread.run();
UPD 2:
The input is from the microphone.
The above AudioInputStream is passed to the StreamSpeechRecognizer:
StreamSpeechRecognizer nRecognizer = new StreamSpeechRecognizer(configuration);
nRecognizer.startRecognition(audioStream);
And the byte array is transformed by FFT and passed to the graph:
double[] arr = FastFourierTransform.TransformRealPart(data);
for (int i = 0; i < arr.length; i++) {
    series1.getData().add(new XYChart.Data<>(i * 22050 / arr.length, arr[i]));
}
Here is a plausible approach to consider.
First, write your own microphone reader. (There are tutorials on how to do this.) Then repackage that data as two parallel Lines that the other applications can read.
Another approach would be to check if either application has some sort of "pass through" capability enabled.
EDIT: added to clarify
This Java sound record utility code example opens a TargetDataLine to the microphone, and stores data from it into an array (lines 69, 70). Instead of storing the data in an array, I'm suggesting that you create two SourceDataLine objects and write the data out to each.
// read from the microphone's TargetDataLine and copy each captured chunk to two streams
recordBytes = new ByteArrayOutputStream();
secondStreamBytes = new ByteArrayOutputStream();
isRunning = true;
while (isRunning) {
    // blocks until a buffer's worth of data has been read from the mike
    bytesRead = audioLine.read(buffer, 0, buffer.length);
    // write the same data out to both destinations
    recordBytes.write(buffer, 0, bytesRead);
    secondStreamBytes.write(buffer, 0, bytesRead);
}
Hopefully it won't be too difficult to figure out how to configure your two programs to read from the created lines rather than from the microphone's line. I'm unable to provide guidance on how to do that.
EDIT 2:
I wish some other people would join in. I'm a little in over my head when it comes to doing anything fancy with streams, and the code you are giving is so minimal that I still don't understand what is happening or how things are connecting.
FWIW: (1) Is the data you are adding into "series1" the streaming data? If so, can you add a line in that for loop and write the same data to a stream consumed by the other class? (This would be a way of using the microphone data "in series" as opposed to "in parallel.")
(2) Data streams often involve code that blocks or that runs at varying paces due to the unpredictable way in which the CPU switches between tasks. So if you do write a "splitter" (as I tried to illustrate by modifying the microphone-reading code I linked earlier), there could arise a situation where the code only runs as fast as the slower of the two "splits" at a given moment. You may need to incorporate some sort of buffering and use separate threads for the two recipients of the mike data.
I wrote my first buffering code recently, for a situation where a microphone-reading line sends a stream to an audio-mixing function on another thread. I only wrote this a few weeks ago and it's the first time I've dealt with running a stream across a thread boundary, so I don't know if the idea I came up with is the best way to do this sort of thing. But it does manage to keep the feed from the mike to the mixer steady, with no dropouts and no losses.
The mike reader reads a buffer of data, then adds this byte[] buffer into a ConcurrentLinkedQueue<Byte[]>.
From the other thread, the audio-mixing code polls the ConcurrentLinkedQueue for data.
I experimented a bit and currently have the size of the byte[] buffer at 512 bytes, and the ConcurrentLinkedQueue is set to hold up to 12 "buffers" before it starts throwing away the oldest ones (the structure is FIFO). This seems to be enough of these small buffers to accommodate the moments when the microphone-processing code temporarily gets ahead of the mixer.
The ConcurrentLinkedQueue has built-in provisions that allow adding and polling to occur from two threads at the same time without throwing an exception. Whether this is something you have to write to help with the hand-off, and what the best buffer size might be, I can't say. Maybe a much larger buffer with fewer buffers held in the queue would be better.
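For what it's worth, here is a rough sketch of the hand-off just described; the class and method names are illustrative rather than taken from the actual code, and the 12-buffer cap is the figure mentioned above:
import java.util.concurrent.ConcurrentLinkedQueue;

// The mike-reading thread adds byte[] buffers; the mixer thread polls them.
// If the consumer falls behind, the oldest buffers are discarded (FIFO).
public class MikeBufferQueue {
    private static final int MAX_BUFFERS = 12;
    private final ConcurrentLinkedQueue<byte[]> queue = new ConcurrentLinkedQueue<>();

    // called from the microphone-reading thread
    public void add(byte[] buffer) {
        queue.add(buffer);
        while (queue.size() > MAX_BUFFERS) {
            queue.poll();   // drop the oldest buffer
        }
    }

    // called from the audio-mixing thread; returns null when no data is waiting
    public byte[] next() {
        return queue.poll();
    }
}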
Maybe someone else will weigh in, or maybe the suggestion will be worth experimenting with and trying out.
Anyway, that's about the best I can do, given my limited experience with this. I hope you are able to work something out.

How to send audio data to Icecast with proper timing?

I am writing an Icecast source. The source handles MP3 at the moment. The application can parse MP3 files to retrieve individual frames and other metadata. The application correctly sends metadata to the Icecast server.
The issue arises when the application attempts to send the MP3 frames to Icecast. It sends the frames too fast, causing skips in the audio when I listen via my media client (VLC).
I have read that Icecast does not handle the timing of the audio stream and that this is the source's job. I can determine the duration of the audio file and all the information regarding each frame.
How do I perform proper timing? Should I wait in between sending individual frames, or between batches of frames? What does the timing actually consist of?
One method I have attempted is to make the application wait in between sending batches of frames; however, this did not fix the timing issue.
You must send your audio data at the sample rate of the stream. The timing you must use is the timing of the playback rate. If you want your source stream to be 44.1kHz, you must send that data at 44.1kHz.
MP3 frame sizes are fixed at 1,152 samples. That means that if you are sending a stream at 44.1kHz, you must send 38.28125 frames per second to Icecast. I suggest having a large buffer on your source end so that you can decode at whatever rate is reasonable, and have another thread for keeping the timing when sending the data.
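A minimal pacing sketch based on those numbers, assuming MPEG-1 Layer III frames of 1,152 samples at 44,100 Hz; sendFrame() is a placeholder for whatever actually writes a frame to the Icecast connection. It schedules each send against a cumulative clock so that small sleep inaccuracies don't add up:
// Pace MP3 frames at the playback rate: 1,152 samples per frame at 44,100 Hz
// is about 26.12 ms per frame (38.28125 frames per second).
public class FramePacer {
    static final int SAMPLES_PER_FRAME = 1152;
    static final int SAMPLE_RATE = 44100;
    static final double NANOS_PER_FRAME = SAMPLES_PER_FRAME * 1_000_000_000.0 / SAMPLE_RATE;

    void streamFrames(java.util.List<byte[]> frames) throws InterruptedException {
        long start = System.nanoTime();
        long framesSent = 0;
        for (byte[] frame : frames) {
            sendFrame(frame);            // placeholder: write one frame to Icecast
            framesSent++;
            // schedule against a cumulative clock so per-frame sleep errors don't accumulate
            long due = start + (long) (framesSent * NANOS_PER_FRAME);
            long waitNanos = due - System.nanoTime();
            if (waitNanos > 0) {
                Thread.sleep(waitNanos / 1_000_000, (int) (waitNanos % 1_000_000));
            }
        }
    }

    void sendFrame(byte[] frame) { /* placeholder: write the frame bytes to the Icecast socket */ }
}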

When to send metadata and stream of next song?

I'm writing an Icecast source. When dealing with one song, everything works fine. I send the Icecast header information, the metadata and then the file stream.
My source needs to handle playlists. At the moment, once my application has finished writing the stream for Song A, it sends the metadata for Song B and then sends the stream for Song B. After Song B has finished writing to Icecast, I send the metadata for Song C and the file stream for Song C, etc.
The issue with my current setup is that every time the next song is sent (metadata + stream), the Icecast buffer resets. I'm assuming it happens whenever a new metadata update is sent.
How do I detect when one song (on Icecast) is finished so that I may send new metadata (and a new stream)?
EDIT: When I listen to the Icecast stream using a client (like VLC), I notice it does not even play the full song, even though the full song is being sent to and received by Icecast. It skips parts of the song. I'm thinking maybe there is a buffer limit on Icecast, and it resets the buffer when it reaches this limit? Should I then purposely slow down the rate at which the source sends data to Icecast?
EDIT: I have determined that the issue is the rate at which I send the audio data to the Icecast server. At the moment, I am not slowing down the data transfer. I need to slow down this transfer so that the speed at which I write the audio data to Icecast is more or less the same speed at which a client would read the stream. I am thinking this rate would actually be the bitrate. If this is the case, I need to have the OutputStream thread sleep for an amount of time before sending the next chunk of data. How long do I make it sleep, assuming this is the issue?
If you continuously flush data to Icecast, the buffer is immediately filled and written over circularly. Most clients (especially VLC) will put backpressure on their stream during playback causing the TCP window size to drop to zero, meaning the server should not send any more data. When this happens, the server has to wait. Once the window size is increased again, the position at which the client was streaming before has been flushed out of the buffer by several minutes of audio, causing a glitch (or commonly, a disconnect).
As you have suspected, you must control the rate at which data is sent to Icecast. This rate must be at the same rate of playback. While this is approximated by the bitrate, it often isn't exact. The best way to handle this is to actually play back this audio programmatically while sending to the codec. You will need to do this soon anyway when encoding with several codecs at several bitrates.

JAVA: BufferedInputStream and BufferedOutputStream

I have several questions:
1. I have two computers connected by a socket connection. When the program executes
outputStream.writeInt(value);
outputStream.flush();
what actually happens? Does the program wait until the other computer reads the integer value?
2. How can I empty the outputStream or inputStream? Meaning, when emptying
the outputStream or inputStream, whatever has been written to that stream gets removed.
(Please don't suggest doing it by closing the connection!)
I tried to empty the inputStream this way:
// read and discard whatever is currently reported as available on the stream
byte[] eatup = new byte[20 * 1024];
int available = 0;
while (true) {
    available = serverInputStream.available();
    if (available == 0)
        break;
    serverInputStream.read(eatup, 0, available);
}
eatup = null;
String fileName = (String) serverInputStream.readObject();
The program should not process this line, as nothing else is being written to the outputStream.
But my program executes it anyway and throws a java.io.OptionalDataException.
Note: I am working on a client-server file transfer project. The client sends files to the server. The second code snippet is for the server terminal. If the 'cancel' button is pressed on the server end, it stops reading bytes from the serverInputStream and sends a signal (I used int -1) to the client. When the client receives this signal it stops sending data to the server, but I've noticed that the serverInputStream is not empty. So I need to empty this serverInputStream so that the client computer is able to send files to the server computer again (that's why I can't manage a lock from the read method).
1 - No. On flush(), the data will be written to the OS kernel, which will likely hand it immediately to the network card driver, which in turn will send it to the receiving end. In a nutshell, the send is fire-and-forget.
2 - As Jeffrey commented, available() is not reliable for this sort of operation. If doing blocking IO then, as he suggests, you should just use read() speculatively. However, it should be said that you really need to define a protocol on top of the raw streams, even if it's just using DataInput/DataOutputStream. When using raw write/read, the golden rule is one write != one read. For example, if you were to write 10 bytes on one side and had a reading loop on the other, there is no guarantee that one read will read all 10 bytes. It may be "read" as any combination of chunks. Similarly, two writes of 10 bytes might appear as one read of 20 bytes on the receiving side. Put another way, there is no concept of a "packet" unless you create a higher-level protocol on top of the raw bytes to do packets. An example would be prefixing each send with a byte length so the receiving side knows how much data to expect in the current packet, as sketched below.
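A minimal sketch of such a length-prefixed exchange using DataInput/DataOutputStream (the class name PacketIO is just illustrative):
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class PacketIO {
    // Sender: prefix each message with its length so the receiver knows where it ends.
    public static void sendPacket(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);   // 4-byte length prefix
        out.write(payload);
        out.flush();
    }

    // Receiver: read the length, then read exactly that many bytes.
    public static byte[] readPacket(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload);          // loops internally until all bytes have arrived
        return payload;
    }
}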
If you need to do anything more complicated than a basic app, I strongly encourage you to investigate some higher-level libraries that have solved many of the gnarly issues of network IO. I would recommend Netty, which I use for production apps. However, it is quite a big leap in understanding from simple IO streams to Netty's more event-based system. There may be other libraries somewhere in the middle.

J2ME, InputStream hangs up after receiving 40K of data over Bluetooth

On sending data over Bluetooth from a PC to my mobile (N73), the InputStream seems to hang up.
InputStream is derived from StreamConnection.
The PC software is built in VB.NET.
The mobile side is in Java ME.
Does the InputStream have an internal buffer that needs to be emptied while reading large chunks of data?
Data is being received in chunks in the 10 KB to 15 KB range, and reading stops after the 3rd chunk is received.
Strangely I am not receiving any exceptions.
I browsed through the InputStream class API documentation and couldn't find any InputStream clear or empty method.
There is only a reset() method, and I don't know what it's used for.
InputStream.reset() is a method you would call sometime after having used InputStream.mark() to force the InputStream to create an internal buffer that allows you to read the same data multiple times, assuming the InputStream supports it by returning true from InputStream.markSupported().
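A small, self-contained illustration of that mark/reset contract (unrelated to the Bluetooth hang itself):
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MarkResetDemo {
    public static void main(String[] args) throws IOException {
        // BufferedInputStream supports mark/reset; many raw streams do not.
        InputStream in = new BufferedInputStream(
                new ByteArrayInputStream(new byte[] {1, 2, 3, 4}));
        if (in.markSupported()) {
            in.mark(4);                        // remember this position; buffer up to 4 bytes
            in.read();                         // consume 1
            in.read();                         // consume 2
            in.reset();                        // rewind to the marked position
            System.out.println(in.read());     // prints 1 again
        }
    }
}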
As for the data transmission issue, we're talking about a handset running Series60 3rd Edition on top of Symbian OS 9.1. Given how extensive Symbian's testing of JSR-82 was, an implementation bug as simple as a 40K limit on the InputStream seems unlikely.
Does the handset behavior change if the server sends smaller chunks at a much lower bitrate?
Does the handset process received data before reading some more?
What else is the MIDlet doing? Is everything else working as expected even after the bluetooth InputStream blocks?
I do remember a fairly important bug in the JSR-82 implementation that might have been fixed only after the initial N73 firmwares were created: do not use Bluetooth at all in any event-dispatching thread (not from any method like MIDlet.startApp(), Canvas.keyPressed(), CommandListener.commandAction(), PlayerListener.playerUpdate()...).
You are better off only using Bluetooth from inside a Thread.run() method you wrote yourself.
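A rough Java ME sketch of that pattern, assuming a Bluetooth connection URL has already been discovered (the class name, buffer size, and URL handling are illustrative only):
import java.io.IOException;
import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.StreamConnection;

// Read the Bluetooth stream from a worker thread instead of an event-dispatch callback.
public class BtReader implements Runnable {
    private final String url; // e.g. a discovered "btspp://..." URL (placeholder)

    public BtReader(String url) {
        this.url = url;
    }

    public void run() {
        try {
            StreamConnection conn = (StreamConnection) Connector.open(url);
            InputStream in = conn.openInputStream();
            byte[] buffer = new byte[1024];
            int read;
            while ((read = in.read(buffer)) != -1) {
                // hand the received bytes off to the rest of the MIDlet here
            }
            in.close();
            conn.close();
        } catch (IOException e) {
            // handle or report the error; we are not on the event thread here
        }
    }
}

// started from application code, not from a UI callback:
// new Thread(new BtReader(url)).start();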
