When to send metadata and stream of next song? - java

I'm writing an Icecast source. When dealing with one song, everything works fine. I send the Icecast header information, the metadata and then the file stream.
My source needs to handle playlists. At the moment, once my application has finished writing the stream for Song A, it sends the metadata for Song B and then sends the stream for Song B. After Song B has finished writing to Icecast, I send the metadata for Song C and the file stream for Song C, etc.
The issue with my current setup is that every time the next song is sent (metadata + stream), the Icecast buffer resets. I'm assuming it happens whenever a new metadata update is sent.
How do I detect when one song (on Icecast) is finished so that I may send new metadata (and a new stream)?
EDIT: When I listen to the Icecast stream using a client (like VLC), I notice it does not even play the full song, even though the full song is being sent to and received by Icecast. It skips parts of the song. I'm thinking maybe there is a buffer limit on Icecast, and it resets the buffer when it reaches this limit? Should I then purposely slow down the rate at which the source sends data to Icecast?
EDIT: I have determined that the issue is the rate at which I send the audio data to the Icecast server. At the moment, I am not slowing down the data transfer at all. I need to slow it down so that the speed at which I write the audio data to Icecast is more or less the same speed at which a client would read the stream. I am thinking this rate would actually be the bitrate. If this is the case, I need to have the OutputStream thread sleep for an amount of time before sending the next chunk of data. How long do I make it sleep, assuming this is the issue?

If you continuously flush data to Icecast, the buffer is immediately filled and written over circularly. Most clients (especially VLC) will put backpressure on the stream during playback, causing the TCP window size to drop to zero, meaning the server should not send any more data. When this happens, the server has to wait. Once the window size is increased again, the position at which the client was streaming has long since been flushed out of the buffer by several minutes of audio, causing a glitch (or, commonly, a disconnect).
As you have suspected, you must control the rate at which data is sent to Icecast. That rate must match the rate of playback. While this is approximated by the bitrate, it often isn't exact. The best way to handle this is to actually play back the audio programmatically while sending it to the codec. You will need to do this soon anyway when encoding with several codecs at several bitrates.
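That said, as a first approximation based on bitrate alone, a pacing loop could look roughly like this (a sketch only; the method name, chunk size, and constant-bitrate assumption are all illustrative):

import java.io.InputStream;
import java.io.OutputStream;

// Rough sketch: pace writes by the playback duration of each chunk.
// 'bitrateBitsPerSec' is assumed constant (e.g. 128000 for 128 kbps CBR).
static void sendPaced(InputStream audio, OutputStream icecast, int bitrateBitsPerSec)
        throws Exception {
    byte[] chunk = new byte[8192]; // illustrative chunk size
    int read;
    while ((read = audio.read(chunk)) != -1) {
        icecast.write(chunk, 0, read);
        icecast.flush();
        // Sleep for roughly the time these bytes take to play back:
        // bytes * 8 bits / bitrate, converted to milliseconds.
        Thread.sleep((long) read * 8 * 1000 / bitrateBitsPerSec);
    }
}

Note that sleep()-based pacing drifts over time, which is one more reason real playback timing beats the bitrate approximation.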

Related

Sockets, message rate limiter Java

Imagine I have a server that can produce messages at a rate of 10,000 messages per second. But my client can only receive up to a maximum of 1000 messages per second.
System 1
My system sends 1000 messages in the 1st millisecond and then does nothing for the remaining 999 ms.
System 2
My system sends 1 message per millisecond, hence in 1000 ms (1 second) it will send 1000 messages.
Q1) Which system is better given that the client can handle a maximum of 500 messages per second?
Q2) What will be the impact of system 1 on the client? Will it overwhelm the client?
Thanks
Will it overwhelm the client: it depends on the size of your messages and the socket buffer size. The messages the sender sends are buffered. If the client cannot consume them because the buffer is full, the output stream the sender is using will block. When the client has consumed some messages, the sender can continue writing as its OutputStream gets unblocked.
A typical buffer size on a Windows system used to be 8192 bytes, but the size can differ by OS and by settings in the OS.
So System 1 will not overwhelm the client; it will simply block at a certain moment.
Which approach is best depends entirely on the design of your application.
For example: I had a similar issue while writing to an Arduino via USB (not socket-based, but otherwise the same problem). In my case, buffered messages were a problem because they were positions from a face-tracking camera. Buffered positions were no longer relevant by the time the Arduino read them, but it had to process them anyway, because such a buffer is a queue and you can only get the most recent value by reading out the old ones first. The Arduino could never keep up with the messages being produced, because by the time a new position reached the Arduino code, it was already outdated. So that was an "overwhelm".
I resolved this by using bi-directional communication. The Arduino would send a message to the producer saying: READY (to receive a message). Then the producer would send one (up-to-date) face-tracking position. Then the Arduino repositioned the camera and requested a new message. This way there was a kind of flow control that prevented the producer from overwhelming the Arduino.
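As a sketch of what that request-driven flow control can look like over a socket in Java (the READY token, the latestPosition() helper, and the line-based protocol are all illustrative, not the actual Arduino code):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Producer side: send the latest value only when the consumer asks for it.
static void serveLatest(Socket socket) throws Exception {
    BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    PrintWriter out = new PrintWriter(socket.getOutputStream(), true); // autoflush
    String request;
    while ((request = in.readLine()) != null) {
        if ("READY".equals(request)) {
            out.println(latestPosition()); // only ever the most recent value
        }
    }
}

static String latestPosition() {
    return "x=0,y=0"; // placeholder for the newest face-tracking position
}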
Neither is better. TCP will alter the actual flow whatever you do yourself.
Neither will overwhelm the client. If the client isn't keeping up, its socket receive buffer will fill up, and so will your socket send buffer, and eventually you will block in send, or get EAGAIN/EWOULDBLOCK if you're in non-blocking mode.

client socket does not receive exactly what the server side socket sends

I have been developing an Android audio chatting program which behaves like a walkie-talkie. When a user presses the talk button, the audio recorder starts recording what the user is saying and writes the audio bytes to a remote server through a socket. On the server side, the server socket just sends the audio bytes it receives to the other client sockets.
I do not have a good way to control the behavior of these sockets. For example, how do I identify which user a client socket belongs to? The socket does not have any field to carry additional information other than the data it writes. So in the end, the solution I worked out was to use the same socket that transfers the audio data to also transfer something like a username string. This works well when the Android client sends out a username string at moments such as right after a client socket successfully connects to the server socket.
The disaster happens when I try to send a username string to inform the other clients who is talking when the user presses the talk button. Let me give you an example to make this clearer:
A user whose name is "user1" presses the talk button to talk.
The application sends the string "usr:user1" to the server side.
It then starts to send the audio data generated by the audio recorder.
On the server side, the server receives the exact "usr:user1" and the following audio data and resends them to the other connected clients. But the problem is the clients do not seem to receive "usr:user1" reliably every time.
Here is how I check the received data:
is = socket.getInputStream();
byte[] buffer = new byte[minBufSize];
numOfReceived = is.read(buffer);
// A short read (less than the full buffer) is assumed to be a control string
if (numOfReceived != -1 && numOfReceived != minBufSize) {
    byte[] ub = new byte[numOfReceived];
    for (int i = 0; i < numOfReceived; i++) {
        ub[i] = buffer[i];
    }
    String usersString = new String(ub, "UTF-8");
    if (usersString.contains("usr:")) {
        System.out.println("current:");
        final String userOfTalking = usersString.substring(4);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                whoIsTalking.setText(userOfTalking + " is talking");
                whoIsTalking.setVisibility(View.VISIBLE);
            }
        });
        continue;
    }
}
Actually, I have no idea whether the input stream contains audio data or string data at any given moment. So I tried to use the return value of InputStream.read() to find out how many bytes were read:
If the return value does not equal -1 (socket closed) or the buffer size I used in the OutputStream.write() call, then I assume it is a string.
But this is highly unreliable. For example, if I loop the call socket.getOutputStream().write(buffer, 0, 100), then I would expect to read 100 bytes at a time from the input stream. But it doesn't work like that: I often get reads of 60 bytes, or 40, or any number less than 100.
It's as if the output stream does not send exactly 100 bytes of data in one piece as it declares, so my string data just mixes with the following audio data. When the application sends the username right after connecting to the server, the other clients receive the correct string, because there is no following audio data to interfere with it.
Can you give me your opinions? Is my guess right? How can I solve this problem? I tried calling Thread.sleep(300) after the application sends the username string when the user presses the talk button, to leave some room before the audio data in case they mix, but it does not work. Any help is much appreciated!
If I've read through this properly... you send exactly 100 bytes, but the subsequent read doesn't get 100, it gets less?
There can be a number of reasons for this. One is that you are not calling flush() when you write. If that's the case then you have a bug, and you need to put an appropriate flush() call in your sending code.
Alternatively, it could be because the OS is fragmenting the data between packets. This is unlikely for small packets (100 bytes) but very likely / necessary for large packets...
You should never rely on ALL your data turning up in a single read... you need to read multiple times to assemble all the data.
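For instance, if you know exactly how many bytes a message should contain, a helper along these lines (a sketch; the method name is illustrative) loops until they have all arrived:

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Reads exactly 'expected' bytes from 'in', looping until they all arrive.
static byte[] readExactly(InputStream in, int expected) throws IOException {
    byte[] data = new byte[expected];
    int offset = 0;
    while (offset < expected) {
        int n = in.read(data, offset, expected - offset);
        if (n == -1) {
            throw new EOFException("stream ended after " + offset + " of " + expected + " bytes");
        }
        offset += n;
    }
    return data;
}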
It's been quite a while since I asked this question and I am going to give my own answer now. Hopefully it's not too late.
Actually, @Philip Couling shared some very valuable insights in his answer; they helped me confirm my guess about the cause of this issue: "the OS is fragmenting the data between packets". Thanks again for his contribution.
The approach that resolved this problem came from a friend of mine. He told me I could create a second socket in the client, connected to the same server socket, to transfer control information in string format: who started talking, who stopped talking, or even to let people chat over it. Each socket sends a string to the server telling it what the socket does and which user it belongs to, in a format like "audio stream: username" or "control info: username", and the server just stores them in two ArrayLists or HashMaps respectively. Every time a user presses the button to stream audio, the corresponding control string is sent to the server to tell it who the stream is from, and the server then redirects this information to the other clients over their control sockets. So now the string data travels on a dedicated socket rather than on the one carrying the audio stream. As a result, "the OS fragments the data" is no longer a problem, because the string data is too short to be fragmented and is only sent on specific events, not continuously like the audio stream.
But the new socket also brings a side effect. Because of network delay, people may find they are still receiving voice for a while after the application tells them someone has stopped talking. The delay can exceed 10 seconds in extreme network conditions and may lead to loud noise if someone starts to talk while their phone is still playing received voice.
To fix this, transferring the informational strings in the audio socket may be the only way to keep each side in sync. But I think we could insert some empty bytes between the audio data and the string data to make sure the string won't be mixed with other data (empty bytes should not change the string). However, I have not tried this method yet. I will add the result after I have examined it.
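For reference, a common alternative to padding with empty bytes is length-prefixed framing, where every message carries a type byte and a byte count so the reader always knows exactly how much to consume. A minimal sketch (the type constants and class name are illustrative, not from the code above):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class Framing {
    static final byte TYPE_CONTROL = 0; // e.g. "usr:user1"
    static final byte TYPE_AUDIO = 1;   // raw recorder bytes

    // Each frame on the wire is: [1 type byte][4-byte length][payload].
    static void writeFrame(DataOutputStream out, byte type, byte[] payload) throws IOException {
        out.writeByte(type);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // readFully blocks until every byte of the payload has arrived, so a frame
    // can never be split across reads or mixed with the next message.
    static byte[] readFrame(DataInputStream in, byte[] typeOut) throws IOException {
        typeOut[0] = in.readByte();
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload);
        return payload;
    }
}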

How to send audio data to Icecast with proper timing?

I am writing an Icecast source. The source handles MP3 at the moment. The application can parse MP3 files to retrieve individual frames and other metadata. The application correctly sends metadata to the Icecast server.
The issue arises when the application attempts to send the MP3 frames to Icecast. It sends the frames too fast, causing skips in the audio when I listen via my media client (VLC).
I have read that Icecast does not handle the timing of the audio stream and that this is the source's job. I can determine the duration of the audio file and all the information regarding each frame.
How do I perform proper timing? Should I wait in between sending individual frames, batches of frames? What does the timing actually consist of?
One method I have attempted is to make the application wait in between sending batches of frames; however, this did not fix the timing issue.
You must send your audio data at the sample rate of the stream. The timing you must use is the timing of the playback rate. If you want your source stream to be 44.1kHz, you must send that data at 44.1kHz.
MP3 frame sizes are fixed at 1,152 samples. That means that if you are sending a stream at 44.1kHz, you must send 38.28125 frames per second to Icecast. I suggest having a large buffer on your source end so that you can decode at whatever rate is reasonable, and have another thread for keeping the timing when sending the data.
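A sketch of that arrangement, assuming a decode thread fills a queue and a scheduler thread drains it at the frame rate (the queue and the Icecast write below are illustrative stubs):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class PacedSender {
    // Filled by a separate decode thread; holds complete MP3 frames.
    final Queue<byte[]> frameBuffer = new ConcurrentLinkedQueue<>();

    void start() {
        // 1152 samples per frame at 44100 Hz: one frame every ~26122 microseconds,
        // i.e. 38.28125 frames per second.
        long microsPerFrame = 1_152_000_000L / 44_100;
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            byte[] frame = frameBuffer.poll();
            if (frame != null) {
                writeToIcecast(frame);
            }
        }, 0, microsPerFrame, TimeUnit.MICROSECONDS);
    }

    void writeToIcecast(byte[] frame) {
        // Hypothetical: write the frame bytes to the Icecast connection's OutputStream.
    }
}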

really showing java outputstream progress and timeouts

I am facing what feels like it should be a solved problem. An Android application I'm writing sends a message, much like SMS, to which a user can attach a file. I'm using an HttpURLConnection to send this data to my server, which basically boils down to a java.io.OutputStream (I'm wrapping it in a DataOutputStream).
On a mobile device, network connectivity can sometimes be downright terrible, and a send may take way too long. I have the following two fundamental problems:
The user has no way of knowing the progress of the upload
If the network is terrible and progress abysmal, I'd rather just abort or have some reasonable timeout than sit there and try for 5-10 minutes.
Problem 1:
I have tried to show upload progress based on my outputstream write() calls which I'm doing with 4K buffers:
byte[] buffer = new byte[4096];
long totalBytes = 0;
int bytesRead;
while ((bytesRead = fis.read(buffer)) > -1) {
    totalBytes += bytesRead;
    dos.write(buffer, 0, bytesRead);
    if (showProgress) {
        updateProgressBar(totalBytes);
    }
}
While this shows me progress, it seems it just shows me how fast the app can transfer the file buffer into the OS network stack's buffer. The progress bar finishes very quickly even on a slow network, and then sits there for a long while before I finally get the JSON back from my server telling me the status of the send. Surely there is some way to get some progress between the time I pass the data to the OS and the time my server tells me it received it?
Problem 2:
Sometimes network connectivity is bad, but not bad enough that the hardware radio triggers the no-connection callback (in which case I go into an offline mode). So when the network is bad but not off, my app will just sit at the sending dialog until the cows come home. This is connected to problem 1 in that I need to somehow be aware of the actual throughput, since OutputStream doesn't natively provide a timeout mechanism. If throughput fell below some threshold, I could cancel the connection and inform the user that they need to get somewhere with decent reception.
Side note: an asynchronous send / output queue is not an option for me, because I cannot persist a message to disk and therefore cannot guarantee the drafted message sticks around indefinitely in case it fails to send at some later point. I need/want to block on send; I just need to be smarter about giving up and/or informing the user about what is going on.
it seems it just shows me how fast the app can transfer the file buffer to the OS network stack buffer.
It's worse than that. It shows you how fast the app can transfer your data into HttpURLConnection's internal ByteArrayOutputStream, which it writes to so that it can see the content length and set the header before sending any content.
Fortunately it's also better than that. If you know in advance how long the data is, set fixed-length transfer mode. If you don't, set chunked transfer mode with a lowish chunk size like 1024.
You will then be seeing how quickly your application can move data into the socket send buffer; in the case of chunked transfer mode, in units of the chunk size. However, once the socket send buffer fills up, your writes will block, and you will be seeing actual network transfers, at least until you have done the last write. Writing and closing are both asynchronous from that point on, so your display will pop down a little early, but everybody has that problem.
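For reference, a sketch of setting those modes before writing anything (the URL and file are illustrative; the long overload of setFixedLengthStreamingMode needs Java 7+ / Android API 19+):

import java.io.File;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

File payload = new File("message-attachment.bin"); // illustrative payload
HttpURLConnection conn =
        (HttpURLConnection) new URL("https://example.com/upload").openConnection();
conn.setDoOutput(true);
// Size known in advance: fixed-length mode streams directly to the socket
// instead of buffering the whole body in memory first.
conn.setFixedLengthStreamingMode(payload.length());
// If the size were unknown, chunked mode with a small chunk would do instead:
// conn.setChunkedStreamingMode(1024);
OutputStream out = conn.getOutputStream();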
Re problem 2, once the transfer has settled down to network speed as above you can then compute your own throughput and react accordingly if it is poor.
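Building on the loop from the question (same fis, dos, and buffer), a rough sketch of that check might be (the 5-second window and 1 KB/s floor are illustrative thresholds):

// Evaluates throughput roughly every 5 seconds and gives up if it is too low.
long windowStart = System.nanoTime();
long windowBytes = 0;
int bytesRead;
while ((bytesRead = fis.read(buffer)) > -1) {
    dos.write(buffer, 0, bytesRead);
    windowBytes += bytesRead;
    long elapsed = System.nanoTime() - windowStart;
    if (elapsed > 5_000_000_000L) {
        long bytesPerSec = windowBytes * 1_000_000_000L / elapsed;
        if (bytesPerSec < 1024) { // illustrative floor: abort below 1 KB/s
            throw new java.io.IOException("upload too slow: " + bytesPerSec + " B/s");
        }
        windowStart = System.nanoTime();
        windowBytes = 0;
    }
}

Keep in mind the early iterations mostly fill the send buffer, so the measurement is only meaningful once writes start blocking.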

Reading from InputStream

I need to write an application which will be reading data from an InputStream. In short: my app will first connect to a Bluetooth device. After connecting, my app will read data from the InputStream continuously. I mean that the device will send data for 20 milliseconds at a time, and the app will be receiving this data while running for 24 hours, maybe even more. For now I read this data this way:
while ((bytesReceived = is.read(buffer)) > -1) {
    // things to do with data
}
This loop receives data when it is in the stream and stops when the InputStream is closed. My problem is that I think this is not an optimal solution. After is.read(buffer) receives data, it blocks waiting for the next data, which I believe consumes a lot of processor time. Do you know any better way to read data that consumes the least processor power? Thanks for any help.
BTW, I'm writing my app in Java on Android.
A blocking read does not consume CPU. The OS will put the calling thread/process to sleep.
That loop is fine.
