Really showing Java OutputStream progress and timeouts

I am facing what feels like it should be a solved problem. An Android application I'm writing sends a message, much like SMS, to which a user can attach a file. I'm using an HttpURLConnection to send this data to my server, which basically boils down to a java.io.OutputStream (I'm wrapping it in a DataOutputStream).
Being on a mobile device, sometimes network connectivity can be downright terrible and a send may take way too long. I have the following two fundamental problems:
The user has no way of knowing the progress of the upload
If the network is terrible and progress abysmal - I'd rather just abort or have some reasonable timeout rather than sit there and try for 5-10 minutes.
Problem 1:
I have tried to show upload progress based on my OutputStream write() calls, which I'm doing with 4 KB buffers:
byte[] buffer = new byte[4096];
long totalBytes = 0;
int bytesRead;
// fis is the FileInputStream for the attachment, dos the DataOutputStream wrapping the connection
while ((bytesRead = fis.read(buffer)) > -1) {
    totalBytes += bytesRead;
    dos.write(buffer, 0, bytesRead);
    if (showProgress) {
        updateProgressBar(totalBytes);
    }
}
While this shows me progress, it seems to just show how fast the app can transfer the file buffer to the OS network stack's buffer. The progress bar finishes very quickly even on a slow network and then sits there for a long time before I finally get the JSON back from my server telling me the status of the send. Surely there is some way to get progress from the time I hand the data to the OS to the time my server tells me it received it?
Problem 2:
Sometimes network connectivity is bad but not bad enough that the hardware radio triggers the no-connection callback (in that case I go into an offline mode). So when the network is bad but not off, my app will just sit at a sending dialog until the cows come home. This is connected to problem 1 in that I need to somehow be aware of the actual throughput, since OutputStream doesn't natively provide a timeout mechanism. If throughput fell below some threshold I could cancel the connection and inform the user that they need to get somewhere with decent reception.
Side note: an asynchronous send / output queue is not an option for me because I cannot persist a message to disk, and therefore cannot guarantee the drafted message is kept indefinitely in case it fails to send at some later point. I need/want to block on send; I just need to be smarter about giving up and/or informing the user about what is going on.

it seems it just shows me how fast the app can transfer the file buffer to the OS network stack buffer.
It's worse than that. It shows you how fast the app can transfer your data into the HttpURLConnection's internal ByteArrayOutputStream, which it is writing to so it can see the content length and set the header before writing any content.
Fortunately it's also better than that. If you know in advance how long the data is, set fixed-length streaming mode. If you don't, set chunked streaming mode with a lowish chunk size like 1024.
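A minimal sketch of what that looks like with HttpURLConnection; the URL, the fileToSend variable and the rest of the request setup are placeholders, not from the question:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Choose a streaming mode *before* calling getOutputStream(), so the connection
// streams to the socket instead of buffering the whole request body in memory.
HttpURLConnection conn = (HttpURLConnection) new URL("https://example.com/upload").openConnection();
conn.setDoOutput(true);
conn.setRequestMethod("POST");

int contentLength = (int) fileToSend.length();        // fileToSend: hypothetical File being attached
if (contentLength > 0) {
    conn.setFixedLengthStreamingMode(contentLength);  // length known in advance
} else {
    conn.setChunkedStreamingMode(1024);               // length unknown: low-ish chunk size
}

OutputStream out = conn.getOutputStream();            // writes now reach the socket as you make them

With either mode set, the write loop stops measuring a copy into an in-memory buffer and starts measuring (roughly) the hand-off to the socket.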
You will then be seeing how quickly your application can move data into the socket send buffer; in the case of chunked transfer mode, in units of the chunk size. However, once the socket send buffer fills up, your writes will block and you will be seeing actual network transfers, at least until you have done the last write. Writing and closing are both asynchronous from that point on, so your display will pop down earlier, but everybody has that problem.
Re problem 2: once the transfer has settled down to network speed as above, you can compute your own throughput and react accordingly if it is poor.
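As a sketch only, here is one way that throughput check could look inside the upload loop; the 5-second window, the 2 KB/s threshold and the conn/fis/dos variables are assumptions for illustration, not part of the answer (error handling omitted):

// Once the socket send buffer is full, write() blocks at network speed,
// so timing the loop gives a usable throughput estimate.
final long MIN_BYTES_PER_SEC = 2 * 1024;       // assumed "too slow" threshold, tune as needed
byte[] buffer = new byte[4096];
long windowBytes = 0;
long windowStart = System.nanoTime();
int bytesRead;

while ((bytesRead = fis.read(buffer)) > -1) {
    dos.write(buffer, 0, bytesRead);           // blocks once the send buffer is full
    windowBytes += bytesRead;

    long elapsed = System.nanoTime() - windowStart;
    if (elapsed > 5000000000L) {               // check roughly every 5 seconds
        long bytesPerSec = windowBytes * 1000000000L / elapsed;
        if (bytesPerSec < MIN_BYTES_PER_SEC) {
            conn.disconnect();                 // give up and tell the user
            throw new IOException("Upload too slow: " + bytesPerSec + " B/s");
        }
        windowBytes = 0;
        windowStart = System.nanoTime();
    }
}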


client socket does not receive exactly what the server side socket sends

I have been developing an Android audio chatting program which behaves like a walkie-talkie. After a user presses the talk button, the audio recorder starts to record what the user is saying and writes the audio bytes to a remote server through a socket. On the server side, the server just forwards the audio bytes it receives to the other client sockets.
I do not have a good way to control the behavior of these sockets. For example, how do I identify which user a client socket belongs to? A socket does not have any field to carry additional information other than the data it writes. So in the end, the solution I worked out is to use the same socket that transfers the audio data to also transfer something like a username string. This works well when the Android client sends out a username string on events like a client socket successfully connecting to the server socket.
The disaster happens when I try to send a username string to inform other clients who is talking when the user presses the talk button. Let me give you an example to make this clearer:
A user whose name is "user1" presses the talk button to talk.
The application sends the string "usr:user1" to the server side.
It then starts to send the audio data generated by the audio recorder.
On the server side, the server receives exactly "usr:user1" and the following audio data, and resends them to the other connected clients. But the problem is that the clients do not seem to receive "usr:user1" all of the time.
Here is how I check the received data:
is = socket.getInputStream();
byte[] buffer = new byte[minBufSize];
numOfReceived = is.read(buffer);
// A read shorter than the audio buffer size is assumed to be a control string
if (numOfReceived != -1 && numOfReceived != minBufSize) {
    byte[] ub = new byte[numOfReceived];
    System.arraycopy(buffer, 0, ub, 0, numOfReceived);
    String usersString = new String(ub, "UTF-8");
    if (usersString.contains("hj:")) {
        System.out.println("current:");
        final String userOfTalking = usersString.substring(3);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                whoIsTalking.setText(userOfTalking + " is talking");
                whoIsTalking.setVisibility(View.VISIBLE);
            }
        });
        continue;   // back to the top of the enclosing read loop
    }
}
Actually, I have no idea whether the input stream contains audio data or string data. So I tried to use the return value of InputStream.read() to find out how many bytes were read:
If the return value is neither -1 (socket closed) nor the buffer size I set in outputStream.write(), then I assume it is a string.
But this is highly unreliable. For example, if I call socket.getOutputStream().write(buffer, 0, 100) in a loop, then I would expect to read 100 bytes at a time from the input stream. But that's not what happens. I often get reads of 60, or 40, or any number less than 100 bytes.
It's as if the OutputStream does not deliver exactly 100 bytes per write as declared, so my string data just mixes with the following audio data. When the application sends the username right after it connects to the server, the other clients receive the correct string, because there is no following audio data to interfere with it.
Can you give me your opinions? Is my guess right? How can I solve this problem? I tried calling Thread.sleep(300) after the application sends the username string when the user presses the talk button, to leave some room before sending the audio data in case they mix, but it does not work. Any help is much appreciated!
If I've read through this properly... you send exactly 100 bytes, but the subsequent read doesn't get 100, it gets less?
There can be a number of reasons for this. One is that you are not calling flush() when you write. If that's the case then you have a bug and you need to put an appropriate flush() call in your sending code.
Alternatively it could be because the OS is fragmenting the data between packets. This is unlikely for small packets (100 bytes) but very likely / necessary for large packets...
You should never rely on ALL your data turning up in a single read... you need to read multiple times to assemble all the data.
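For example, to read a message whose length you already know, you have to loop until you have assembled all of it; the expectedLength and is variables below are placeholders, not code from the question:

// Keep reading until all expectedLength bytes have arrived; a single read()
// may return only part of what the sender wrote.
byte[] msg = new byte[expectedLength];
int off = 0;
while (off < expectedLength) {
    int n = is.read(msg, off, expectedLength - off);
    if (n == -1) {
        throw new EOFException("Stream closed after " + off + " bytes");
    }
    off += n;
}
// new DataInputStream(is).readFully(msg) performs exactly this loop for you.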
It's been quite a while since I asked this question, and I am going to give my own answer now. Hopefully it's not too late.
Actually @Philip Couling shared some very valuable insight in his answer; it helped me confirm my guess about the cause of this issue - "the OS is fragmenting the data between packets". Thanks again for his contribution.
The approach to resolve this problem came from one of my friends. He told me that I could create a new socket on the client that connects to the same server socket to transfer some control information in string format, to tell the server things like who started talking, who stopped talking, or even to let people chat over it. Each socket sends a string to the server to say what it is doing and which user it belongs to, in a format like "audio stream: username" or "control info: username", and the server just stores them in two ArrayLists or HashMaps respectively. So every time a user presses the button to stream audio, the corresponding control string is sent to the server to tell it whose stream it is, and the server then redirects that information to the other clients over their control sockets. So now we transfer the string data on a dedicated socket rather than the one carrying the audio stream. As a result, "the OS fragments the data" is no longer a problem, because the string data is too short to be fragmented and because we only send it on specific events, not as continuously as the audio stream.
But the new socket also brings a side effect. Because of network delay, people may find they are still receiving voice for a while after the application tells them someone stopped talking. The delay can be over 10 seconds in extreme network conditions and may lead to loud noise if someone starts to talk while his phone is still playing received voice.
To fix this, transferring the informational string over the audio socket may be the only way to keep each side in sync. But I think we could insert some empty bytes between the audio data and the string data to make sure the string won't be mixed with other data (empty bytes should not change the string). However, I have not tried this method yet. I will add the result after I have tested it.

When to send metadata and stream of the next song?

I'm writing an Icecast source. When dealing with one song, everything works fine. I send the Icecast header information, the metadata and then the file stream.
My source needs to handle playlists. At the moment, once my application has finished writing the stream for Song A, it sends the metadata for Song B and then sends the stream for Song B. After Song B has finished writing to Icecast, I send the metadata for Song C and the file stream for Song C, etc.
The issue with my current setup is that every time the next song is sent (metadata + stream), the Icecast buffer resets. I'm assuming it happens whenever a new metadata update is sent.
How do I detect when one song (on Icecast) has finished so that I may send new metadata (and a new stream)?
EDIT: When I listen to the Icecast stream using a client (like VLC), I notice it does not even play the full song, even though the full song is being sent to and received by Icecast. It skips parts of the song. I'm thinking maybe there is a buffer limit on Icecast, and it is resetting the buffer when it reaches this limit? Should I then purposely slow down the rate at which the source sends data to Icecast?
EDIT: I have determined that the issue is the rate at which I send the audio data to the Icecast server. At the moment, I am not slowing down the data transfer. I need to slow down this transfer so that the speed at which I write the audio data to Icecast is more or less the same speed at which a client would read the stream. I am thinking this rate would actually be the bitrate. If this is the case, I need to have the OutputStream thread sleep for some amount of time before sending the next chunk of data. How long do I make it sleep, assuming this is the issue?
If you continuously flush data to Icecast, the buffer is immediately filled and written over circularly. Most clients (especially VLC) will put backpressure on their stream during playback causing the TCP window size to drop to zero, meaning the server should not send any more data. When this happens, the server has to wait. Once the window size is increased again, the position at which the client was streaming before has been flushed out of the buffer by several minutes of audio, causing a glitch (or commonly, a disconnect).
As you have suspected, you must control the rate at which data is sent to Icecast. This rate must match the rate of playback. While this is approximated by the bitrate, it often isn't exact. The best way to handle this is to actually play back the audio programmatically while sending it to the codec. You will need to do this soon anyway when encoding with several codecs at several bitrates.
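As a rough sketch of the bitrate-based throttle the asker proposes (with the caveat above that the bitrate is only an approximation); the bitrate value, stream variables and chunk size are assumptions, and interrupt/error handling is omitted:

// Pace writes so Icecast receives data at roughly real-time playback speed.
int bitrateKbps = 128;                                // e.g. a 128 kbit/s stream
int bytesPerSecond = bitrateKbps * 1000 / 8;
byte[] chunk = new byte[4096];
int read;

while ((read = audioIn.read(chunk)) != -1) {          // audioIn: the current song's stream
    icecastOut.write(chunk, 0, read);                 // icecastOut: the stream to Icecast
    icecastOut.flush();
    long sleepMillis = (long) read * 1000 / bytesPerSecond;
    Thread.sleep(sleepMillis);                        // sleep for the chunk's playback duration
}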

Java: BufferedInputStream and BufferedOutputStream

I have several questions:
1. I have two computers connected by a socket connection. When the program executes
outputStream.writeInt(value);
outputStream.flush();
what actually happens? Does the program wait until the other computer reads the integer value?
2. How can I empty the outputStream or inputStream? Meaning, when emptying the outputStream or inputStream, whatever was written to that stream gets removed. (Please don't suggest doing it by closing the connection!)
I tried to empty the inputStream this way:
// Try to drain whatever is currently buffered on the input stream
byte[] eatup = new byte[20 * 1024];
int available = 0;
while (true) {
    available = serverInputStream.available();
    if (available == 0)
        break;
    serverInputStream.read(eatup, 0, available);
}
eatup = null;
String fileName = (String) serverInputStream.readObject();
The program should not process that line, as nothing else is being written to the outputStream. But my program executes it anyway and throws a java.io.OptionalDataException.
Note: I am working on a client-server file transfer project. The client sends files to the server. The second code snippet is for the server side. If the 'cancel button' is pressed on the server end, it stops reading bytes from the serverInputStream and sends a signal (I used int -1) to the client. When the client receives this signal it stops sending data to the server, but I've noticed that the serverInputStream is not empty. So I need to empty this serverInputStream so that the client computer is able to send the server computer files again. (That's why I can't manage a lock from the read method.)
1 - No. On flush() the data is handed to the OS kernel, which will likely hand it straight to the network card driver, which in turn sends it to the receiving end. In a nutshell, the send is fire and forget.
2 - As Jeffrey commented, available() is not reliable for this sort of operation. If doing blocking IO then, as he suggests, you should just use read() speculatively. However, it should be said that you really need to define a protocol on top of the raw streams, even if it's just using DataInputStream/DataOutputStream. When using raw write/read the golden rule is: one write != one read. For example, if you were to write 10 bytes on one side and had a reading loop on the other, there is no guarantee that one read will return all 10 bytes. It may arrive as any combination of chunks. Similarly, two writes of 10 bytes might appear as one read of 20 bytes on the receiving side. Put another way, there is no concept of a "packet" unless you create a higher-level protocol on top of the raw bytes to provide packets. An example would be prefixing each send with a byte length so the receiving side knows how much data to expect in the current packet.
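A minimal sketch of that kind of length-prefixed framing over Data streams (not code from the question); each logical packet is written with a 4-byte length header and read back with readFully:

// Sender: prefix each logical packet with its length.
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
byte[] payload = "hello".getBytes("UTF-8");
out.writeInt(payload.length);    // 4-byte length header
out.write(payload);              // then the payload itself
out.flush();

// Receiver: read the length first, then exactly that many bytes.
DataInputStream in = new DataInputStream(socket.getInputStream());
int len = in.readInt();
byte[] data = new byte[len];
in.readFully(data);              // loops internally until all len bytes have arrived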
If you need to do anything more complicated than a basic app, I strongly encourage you to investigate some higher-level libraries that have solved many of the gnarly issues of network IO. I would recommend Netty, which I use for production apps. However, it is quite a big leap in understanding from simple IO streams to Netty's more event-based system. There may be other libraries somewhere in the middle.

How to "clear out" the receive buffer on a Java DatagramSocket?

I have a Java program that is constantly being sent UDP data from an external system.
Periodically, we need to stop receiving data (because another machine is handling it). During those times, my socket reader thread goes into a sleep loop. When it is time to start receiving packets, I go into socket.receive(Packet) again and have a buffer full of packets that I should not be handling. (The data came while in the "stop time".)
Is there a way to clear the buffer of a DatagramSocket?
If not, what is the best alternative? Set the buffer size to 0 when I go into the wait state and restore it when I start to service packets again? Close the socket while I wait and open a new one when I come back?
Rather than having the downtime on the socket, have the downtime on whatever code processes the packets.
So the socket continues to receive like it normally would, but if it's on downtime, it just immediately drops the packet.
Not exactly the most efficient solution, but it's really easy to implement and might be useful as it leaves the node open in other cases for accepting different types of packets at different times.
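A sketch of that approach, assuming a paused flag flipped by whatever decides that another machine is handling the traffic; the port, buffer size and handlePacket method are placeholders, not from the post:

// Fields on the reader thread's owner; both flags are toggled from elsewhere.
private volatile boolean paused = false;
private volatile boolean running = true;

void readLoop() throws IOException {
    DatagramSocket socket = new DatagramSocket(9876);
    byte[] buf = new byte[2048];
    while (running) {
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet);       // keep receiving so the OS buffer never piles up
        if (paused) {
            continue;                 // downtime: drop the packet immediately
        }
        handlePacket(packet);         // normal processing (hypothetical method)
    }
    socket.close();
}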

What is a TCP window update?

I'm making my own custom server software for a game in Java (the game and original server software were written in Java). There isn't any protocol documentation available, so I am having to read the packets with Wireshark.
While a client is connecting, the server sends it the level file in gzip format. At about 94 packets into sending the level, my server crashes the client with an ArrayIndexOutOfBoundsException. According to the capture file from the original server, it sends a TCP Window Update at about that point. What is a TCP Window Update, and how would I send one using a SocketChannel?
TCP windows are used for flow control between the peers on a connection. With each ACK packet, a host will send a "window size" field. This field says how many bytes of data that host can receive before it's full. The sender is not supposed to send more than that amount of data.
The window might get full if the client isn't receiving data fast enough. In other words, the TCP buffers can fill up while the application is off doing something other than reading from its socket. When that happens, the client sends an ACK packet advertising a window size of zero. At that point, the server is supposed to stop sending data. Any packets sent to a machine with a full window will not be acknowledged. (This will cause a badly behaved sender to retransmit. A well-behaved sender will just buffer the outgoing data. If the buffer on the sending side fills up too, then the sending app will block when it tries to write more data to the socket!)
This is a TCP stall. It can happen for a lot of reasons, but ultimately it just means the sender is transmitting faster than the receiver is reading.
Once the app on the receiving end gets back around to reading from the socket, it will drain some of the buffered data, which frees up some space. The receiver will then send a "window update" packet to tell the sender how much data it can transmit. The sender starts transmitting its buffered data and traffic should flow normally.
Of course, you can get repeated stalls if the receiver is consistently slow.
I've worded this as if the sender and receiver are different, but in reality, both peers are exchanging window updates with every ACK packet, and either side can have its window fill up.
The overall message is that you don't need to send window update packets directly. It would actually be a bad idea to spoof one up.
Regarding the exception you're seeing... it's not likely to be either caused or prevented by the window update packet. However, if the client is not reading fast enough, you might be losing data. In your server, you should check the return value from your SocketChannel.write() calls. It can be less than the number of bytes you're trying to write. This happens when the sender's transmit buffer gets full, which can happen during a TCP stall. If you ignore the return value, you might be losing bytes.
For example, if you're trying to write 8192 bytes with each call to write, but one of the calls returns 5691, then you need to send the remaining 2501 bytes on the next call. Otherwise, the client won't see the remainder of that 8K block and your file will be shorter on the client side than on the server side.
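Since the question mentions a SocketChannel, a small helper that drains the buffer covers the blocking case; this is a generic sketch, not the original server's code:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Keep writing until the whole buffer has been sent, so a short write never
// silently drops the tail of a block.
static void writeFully(SocketChannel channel, ByteBuffer buf) throws IOException {
    while (buf.hasRemaining()) {
        channel.write(buf);   // may write fewer bytes than remain; the loop picks up the rest
    }
}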
This happens really deep in the TCP/IP stack; in your application (server and client) you don't have to worry about TCP windows. The error must be something else.
TCP WindowUpdate - This indicates that the segment was a pure WindowUpdate segment. A WindowUpdate occurs when the application on the receiving side has consumed already received data from the RX buffer causing the TCP layer to send a WindowUpdate to the other side to indicate that there is now more space available in the buffer. Typically seen after a TCP ZeroWindow condition has occurred. Once the application on the receiver retrieves data from the TCP buffer, thereby freeing up space, the receiver should notify the sender that the TCP ZeroWindow condition no longer exists by sending a TCP WindowUpdate that advertises the current window size.
https://wiki.wireshark.org/TCP_Analyze_Sequence_Numbers
A TCP window update has to do with communicating the available buffer size between the sender and the receiver. It is not the likely cause of an ArrayIndexOutOfBoundsException. Most likely the code is expecting some kind of data that it is not getting (quite possibly from well before this point, and only now referencing it). Without seeing the code and the stack trace, it is really hard to say anything more.
You can dive into this web site http://www.tcpipguide.com/free/index.htm for lots of information on TCP/IP.
Do you get any details with the exception? It is not likely related to the TCP Window Update packet (have you seen it repeat exactly across multiple instances?). It is more likely related to your processing code that works on the received data. The window update is normally just a trigger, not the cause of your problem. For example, if you use an NIO selector, a window update may trigger the wake-up of a writing channel, and that in turn triggers the faulty logic in your code. Get a stack trace and it will show you the root cause.
