How to send audio data to Icecast with proper timing? - java

I am writing an Icecast source. The source handles MP3 at the moment. The application can parse MP3 files to retrieve individual frames and other metadata. The application correctly sends metadata to the Icecast server.
The issue arises when the application attempts to send the MP3 frames to Icecast. It sends the frames too fast, causing skips in the audio when I listen via my media client (VLC).
I have read that Icecast does not handle the timing of the audio stream and that this is the source's job. I can determine the duration of the audio file and all the information regarding each frame.
How do I perform proper timing? Should I wait between sending individual frames, or between batches of frames? What does the timing actually consist of?
One method I have attempted is to make the application wait between sending batches of frames; however, this did not fix the timing issue.

You must send your audio data at the sample rate of the stream. The timing you must use is the timing of the playback rate. If you want your source stream to be 44.1kHz, you must send that data at 44.1kHz.
MP3 frames are fixed at 1,152 samples each. That means that if you are sending a stream at 44.1kHz, you must send 44,100 / 1,152 = 38.28125 frames per second to Icecast. I suggest having a large buffer on your source end so that you can decode at whatever rate is reasonable, and a separate thread for keeping the timing when sending the data.
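Below is a minimal sketch of such a pacing thread. The frameQueue filled by your parser, the Icecast OutputStream, and the 44.1kHz constants are all assumptions for illustration; the essential idea is to schedule each frame against a wall-clock deadline derived from the sample rate, rather than sleeping a fixed amount per frame (which accumulates drift):

import java.io.OutputStream;
import java.util.concurrent.BlockingQueue;

public class PacedFrameSender implements Runnable {
    // At 44.1kHz, each 1,152-sample MP3 frame represents 1152/44100 s
    // (~26.122 ms) of audio.
    private static final double NANOS_PER_FRAME = 1152.0 / 44100.0 * 1_000_000_000.0;

    private final BlockingQueue<byte[]> frameQueue; // filled by the parser thread
    private final OutputStream icecast;             // socket stream to Icecast

    public PacedFrameSender(BlockingQueue<byte[]> frameQueue, OutputStream icecast) {
        this.frameQueue = frameQueue;
        this.icecast = icecast;
    }

    @Override
    public void run() {
        long start = System.nanoTime();
        long framesSent = 0;
        try {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] frame = frameQueue.take();
                // Deadline for this frame, relative to the start of the stream.
                long deadline = start + (long) (framesSent * NANOS_PER_FRAME);
                long sleepNanos = deadline - System.nanoTime();
                if (sleepNanos > 0) {
                    Thread.sleep(sleepNanos / 1_000_000, (int) (sleepNanos % 1_000_000));
                }
                icecast.write(frame);
                icecast.flush();
                framesSent++;
            }
        } catch (Exception e) {
            // A real source would reconnect or surface the error here.
            e.printStackTrace();
        }
    }
}

Because each deadline is computed from the total number of frames sent, small sleep inaccuracies do not accumulate over the life of the stream.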

Related

getResponseAsStream - What does the InputStream point to?

I am curious as to how getResponseAsStream for HttpClient actually works.
According to this article, you should use getResponseAsStream instead of loading the entire response into memory.
My question is: how does this work? Where does the InputStream point?
Take an example where a particular REST service request returns generated JSON: where would the server store that so it can be streamed to the client? Main memory seems like the only option.
If that is the case, you are not solving the problem of memory depletion. How does this really work?
Consider the following, extremely simplified scenario:
The server generates loads of data, e.g. by reading a large file. It writes the data via an OutputStream into a send buffer. The networking stack reads data from the send buffer and sends packets of data to the client. There the incoming data is put into a receive buffer.
Eventually the receive buffer is full, and the client stops accepting data packets from the server. This causes the send buffer to fill up. At this point the server is paused, since it can no longer put data into the send buffer.
The client uses getResponseAsStream to get an InputStream implementation that reads data from the receive buffer. As soon as the client reads data through the InputStream, the receive buffer empties, and the networking stack on the client side again accepts data packets, which causes the send buffer on the server side to drain. Now the server can write data into the buffer again.
This way the client can read any amount of data, and the system never needs more space than the send and receive buffer.
Of course, this is extremely simplified. There are more layers, and more buffers involved. But I hope this explains the basic principle.
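To make the principle concrete, here is a minimal sketch using plain HttpURLConnection (the URL is a placeholder; the same flow control applies to HttpClient's response stream). No matter how large the response body is, the application only ever holds one 8KB chunk; the OS-level send and receive buffers provide the backpressure described above:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class StreamingReadExample {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://example.com/large-resource");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            byte[] chunk = new byte[8192];
            long total = 0;
            int n;
            // Each read drains the receive buffer, letting a server that is
            // blocked on a full send buffer continue writing.
            while ((n = in.read(chunk)) != -1) {
                total += n; // process the chunk here instead of buffering it all
            }
            System.out.println("Read " + total + " bytes");
        } finally {
            conn.disconnect();
        }
    }
}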

When to send metadata and stream of next song?

I'm writing an Icecast source. When dealing with one song, everything works fine. I send the Icecast header information, the metadata and then the file stream.
My source needs to handle playlists. At the moment, once my application is finished writing the stream for Song A, it sends the metadata for Song B and then sends the stream for Song B. After Song B is finished writing to Icecast, I send the metadata for Song C and the file stream for Song C, etc.
The issue with my current setup is that every time the next song is sent (metadata + stream), the Icecast buffer resets. I'm assuming this happens whenever a new metadata update is sent.
How do I detect when one song (on Icecast) is finished so that I may send new metadata (and a new stream)?
EDIT: When I listen to the Icecast stream using a client (like VLC), I notice it does not even play the full song, even though the full song is being sent to and received by Icecast. It skips parts of the song. I'm thinking maybe there is a buffer limit on Icecast, and it resets the buffer when it reaches this limit? Should I then purposely slow down the rate at which the source sends data to Icecast?
EDIT: I have determined that the issue is the rate at which I send the audio data to the Icecast server. At the moment, I am not slowing down the data transfer. I need to slow down this transfer so that the speed at which I write the audio data to Icecast is more or less the same speed at which a client would read the stream. I am thinking this rate would actually be the bitrate. If this is the case, I need to have the OutputStream thread sleep for an amount of time before sending the next chunk of data. How long do I make it sleep, assuming this is the issue?
If you continuously flush data to Icecast, the buffer is immediately filled and written over circularly. Most clients (especially VLC) will put backpressure on their stream during playback causing the TCP window size to drop to zero, meaning the server should not send any more data. When this happens, the server has to wait. Once the window size is increased again, the position at which the client was streaming before has been flushed out of the buffer by several minutes of audio, causing a glitch (or commonly, a disconnect).
As you have suspected, you must control the rate at which data is sent to Icecast. This rate must be at the same rate of playback. While this is approximated by the bitrate, it often isn't exact. The best way to handle this is to actually play back this audio programmatically while sending to the codec. You will need to do this soon anyway when encoding with several codecs at several bitrates.
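If you do want a simple approximation first, a byte-rate throttle based on the nominal bitrate looks roughly like the sketch below. The 128kbps constant is an assumption for illustration, and as noted above the bitrate only approximates the true playback rate, so frame-accurate timing is preferable:

import java.io.IOException;
import java.io.OutputStream;

public class ThrottledOutputStream extends OutputStream {
    private static final int BITRATE = 128_000; // bits per second (assumed)
    private static final double BYTES_PER_NANO = BITRATE / 8.0 / 1_000_000_000.0;

    private final OutputStream out;
    private final long start = System.nanoTime();
    private long bytesWritten = 0;

    public ThrottledOutputStream(OutputStream out) {
        this.out = out;
    }

    @Override
    public void write(int b) throws IOException {
        write(new byte[] { (byte) b }, 0, 1);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // Sleep until the moment this many bytes should have been sent.
        long deadline = start + (long) (bytesWritten / BYTES_PER_NANO);
        long sleepNanos = deadline - System.nanoTime();
        if (sleepNanos > 0) {
            try {
                Thread.sleep(sleepNanos / 1_000_000, (int) (sleepNanos % 1_000_000));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("interrupted while pacing", e);
            }
        }
        out.write(b, off, len);
        bytesWritten += len;
    }
}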

How do I combine multiple javax.sound.sampled.TargetDataLines?

I'm creating a VOIP server & client system, but only 1/(number of connected users) of the voice packets are played. I think it's because it can only play one stream of audio from one TargetDataLine, with only one TargetDataLine per device, and I'm writing multiple audio streams to it each second.
I'm calling line.write(t, 0, t.length);, where line is my TargetDataLine and t is my byte array containing samples. Is there a way to combine multiple audio streams into one mono stream before redistributing it between clients?
I figured it out (I was googling with the wrong terms): you just need to add the corresponding samples from each stream together and clamp the result to the valid sample range so it does not overflow.
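As a minimal sketch, mixing two equal-length buffers of 16-bit little-endian PCM (the format is an assumption; adjust the decoding for your AudioFormat) looks like this:

public class PcmMixer {
    // Mixes two equal-length 16-bit little-endian PCM buffers into one.
    public static byte[] mix(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i += 2) {
            // Decode one signed 16-bit sample from each stream.
            int sa = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
            int sb = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
            int sum = sa + sb;
            // Clamp to the signed 16-bit range instead of letting it wrap.
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (byte) (sum & 0xFF);
            out[i + 1] = (byte) ((sum >> 8) & 0xFF);
        }
        return out;
    }
}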

really showing java outputstream progress and timeouts

I am facing what feels like it should be a solved problem. An Android application I'm writing sends a message much like SMS, where a user can attach a file. I'm using an HttpURLConnection to send this data to my server, which basically boils down to a java.io.OutputStream (I'm wrapping it in a DataOutputStream).
Being on a mobile device, sometimes network connectivity can be downright terrible and a send may take way too long. I have the following two fundamental problems:
The user has no way of knowing the progress of the upload
If the network is terrible and progress abysmal, I'd rather just abort or hit some reasonable timeout than sit there and try for 5-10 minutes.
Problem 1:
I have tried to show upload progress based on my OutputStream write() calls, which I'm doing with 4K buffers:
byte[] buffer = new byte[4096];
long totalBytes = 0;
int bytesRead;
while ((bytesRead = fis.read(buffer)) > -1) {
    totalBytes += bytesRead;
    dos.write(buffer, 0, bytesRead);
    if (showProgress) {
        updateProgressBar(totalBytes);
    }
}
While this shows me progress, it seems it just shows me how fast the app can transfer the file buffer to the OS network stack buffer. The progress bar finishes very quickly even on slow network and then sits there for another large amount of time before I finally get the JSON back from my server telling me the status of the send. Surely there is some way to get some progress from the time I pass it to the OS to the time my server tells me it received it?
Problem 2:
Sometimes network connectivity is bad, but not bad enough that the hardware radio triggers the no-connection callback (in which case I go into an offline mode). So when the network is bad but not off, my app will just sit at the sending dialog until the cows come home. This is connected to problem 1 in that I need to somehow be aware of the actual throughput, since OutputStream doesn't provide a timeout mechanism natively. If it fell below some threshold, I could cancel the connection and inform the user that they need to get somewhere with decent reception.
Side note: an asynchronous send / output queue is not an option for me, because I cannot persist a message to disk and therefore cannot guarantee the drafted message survives indefinitely if it fails to send at some later point. I need/want to block on send; I just need to be smarter about giving up and/or informing the user about what is going on.
it seems it just shows me how fast the app can transfer the file buffer to the OS network stack buffer.
It's worse than that. It shows you how fast the app can transfer your data into the HttpURLConnection's internal ByteArrayOutputStream, which it is writing to so it can see the content length and set the header before writing any content.
Fortunately it's also better than that. If you know in advance how long the data is, set fixed-length transfer mode. If you don't, set chunked transfer mode with a lowish chunk size like 1024.
You will then be seeing how quickly your application can move data into the socket send buffer; in the case of chunked transfer mode, in units of the chunk size. However once the socket send buffer fills up your writes will then block and you will be seeing actual network transfers, at least until you have done the last write. Writing and closing are both asynchronous from that point on, so your display will pop down earlier, but everybody has that problem.
Re problem 2, once the transfer has settled down to network speed as above you can then compute your own throughput and react accordingly if it is poor.
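A rough sketch combining both points follows. setFixedLengthStreamingMode and setChunkedStreamingMode are real HttpURLConnection methods; the threshold, warm-up period, and surrounding structure are assumptions for illustration:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class MonitoredUpload {
    private static final long MIN_BYTES_PER_SEC = 2048; // assumed give-up threshold

    public static void upload(URL url, FileInputStream fis, long fileLength) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        // Known length: streams directly instead of buffering the whole body.
        conn.setFixedLengthStreamingMode(fileLength);
        // If the length were unknown: conn.setChunkedStreamingMode(1024);

        byte[] buffer = new byte[4096];
        long totalBytes = 0;
        long start = System.nanoTime();
        int bytesRead;
        try (OutputStream out = conn.getOutputStream()) {
            while ((bytesRead = fis.read(buffer)) > -1) {
                out.write(buffer, 0, bytesRead);
                totalBytes += bytesRead;
                double elapsedSec = (System.nanoTime() - start) / 1e9;
                // After the socket send buffer fills, writes block and this
                // ratio approaches the real network throughput.
                if (elapsedSec > 5 && totalBytes / elapsedSec < MIN_BYTES_PER_SEC) {
                    conn.disconnect(); // give up and inform the user
                    throw new IOException("throughput too low, aborting upload");
                }
            }
        }
        System.out.println("Server response: " + conn.getResponseCode());
    }
}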

Reading from InputStream

I need to write an application which will be reading data from an InputStream. In short: my app will first connect to a Bluetooth device. After connection, my app will be reading data from the InputStream continuously. I mean that the device will send data for 20 ms at a time, and the app will be receiving this data while running for 24 hours or maybe even more. For now I read this data this way:
while ((bytesReceived = is.read(buffer)) > -1) {
    // things to do with data
}
This loop receives data when it is in the stream and stops when the InputStream is closed. My problem is that I think it is not an optimal solution: after is.read(buffer) receives data, it blocks waiting for the next data, which I assume consumes a lot of processor time. Do you know any better way to read data that consumes less processor power? Thanks for any help.
BTW. I write my app in Java on Android.
A blocking read does not consume CPU. The OS will put the calling thread/process to sleep.
That loop is fine.
