So, here is some very simple server code:
while (true) {
    Socket sock = serverSock.accept();
    PrintWriter writer = new PrintWriter(sock.getOutputStream());
    String advice = "advice here";
    writer.println(advice);
    writer.close();
}
And a simple Client that will read data from this Socket:
Socket s = new Socket("127.0.0.1", 4242);
InputStreamReader streamReader = new InputStreamReader(s.getInputStream());
BufferedReader reader = new BufferedReader(streamReader);
String advice = reader.readLine();
What I am asking is actually very high-level and quite simple. How is sock.getOutputStream() connected to s.getInputStream()?
How can the data sent over the client's output stream be read from the server's input stream? I cannot make the connection in my head and I cannot visualize it.
My question is: how are the InputStream and OutputStream objects connected? How can writer.println(advice) end up in reader.readLine()? How is the OutputStream connected to the InputStream?
Any help is greatly appreciated.
Sockets use TCP. If you are unfamiliar with it, TCP is a protocol that specifies the mechanics of transmitting data over the internet. The important part of the protocol for this question is the connection.
When 2 devices wish to communicate, a Socket is created on each device, for the port being used to send/receive. This Socket provides a line of communication, on which the server can listen. The Sender sends "Packets" of data across that line of communication, where they are received by the Receiver.
The packets carry a "payload" of data, one of which has data signifying it is the last packet. This allows the Receiver to interpret the full message and respond accordingly.
There are a lot of mechanisms involved in ensuring all the data gets there, and in the right order, but that is a little outside the scope of this question. Hope this helps!
The version of the Socket constructor that the client calls creates a connected socket directed at the specified endpoint. Since it is not shown, we presume the serverSock in the server was created and initialized to listen on that endpoint. When the client successfully connects, the initialized socket is returned, and correspondingly, the server's accept() returns a socket connected to the client.
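For completeness, a minimal sketch of how that listening socket might be created (the port 4242 comes from the client snippet; the rest is an assumption, since the server setup is not shown):

import java.net.ServerSocket;

ServerSocket serverSock = new ServerSocket(4242); // bind to port 4242 and start listening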
Connected sockets permit bidirectional communication between the connected endpoints. What is written to one socket is delivered to and can be (eventually) read by the corresponding peer socket to which it is connected. The getOutputStream() and getInputStream() methods return stream objects that permit stream I/O operations that will pass the data through the corresponding socket from which it was created.
Below, I provide answers to the specific questions listed in one of your comments to my post.
Q: What happens (technically) if I write to outputstream but do not read from the inputstream?
A: The data is held inside a buffer until it is read. There is a limit to how much data the buffer can hold. When the limit is reached, the writer on the other socket will either block or be notified it has to wait for the peer socket to read out what has already been written.
Q: How long will it live?
A: Unread data will remain in the buffer until it is read out or until the connection is forcibly torn down with a reset. A connection reset is distinguished from a connection close in that the close indicates no more data will be sent, and the receiver gets this notification as a successful read of 0 bytes of data (in Java, InputStream.read() reports this end of stream as -1), while a reset indicates the connection is no longer valid; any buffered data will be dropped, and both read and write operations will fail.
Q: How much time do I have until I can read again?
A: You may read at any time, and it will succeed so long as the connection is still valid.
Q: Can I write twice and then read twice?
A: A connected socket is a byte stream. There is no true relation between "the number of writes" on one end and "the number of reads" on the other end, except that 0 writes will mean 0 successful reads.
As a simple example, a single write of 4 bytes may correspond to 4 reads of 1 byte. A large single write may get segmented in such a way that the receiver will be forced to issue multiple reads to successfully receive the entire message.
Similarly, two writes, each of 4 bytes, may correspond to a single read of 8 bytes. Multiple small writes may get delivered to the receiver in a way that all of them can be retrieved in a single read.
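As a hedged illustration of that point (sock and msgLength are assumed names, not from the question): if the receiver knows how many bytes to expect, it must keep reading until it has them all, which DataInputStream.readFully() does for you.

import java.io.DataInputStream;

DataInputStream in = new DataInputStream(sock.getInputStream());
byte[] message = new byte[msgLength];  // msgLength is assumed to be known from your protocol
in.readFully(message);                 // loops internally until all msgLength bytes have arrived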
The sockets connect using a procedure called the three-way handshake.
Once the connection is created, data can be sent back and forth between clients via the various streams.
Related
I have been developing an Android audio chatting program which behaves like a walkie-talkie. After a user presses the talk button, the audio recorder starts recording what the user is saying and writes the audio bytes to a remote server through a socket. On the server side, the server socket just sends the audio bytes it receives to the other client sockets.
I do not have a good way to control the behavior of these sockets. For example, how do I identify which user a client socket belongs to? The socket does not have any field to carry additional information other than the data it writes. So in the end, the solution I worked out is to use the same socket that transfers the audio data to also transfer something like a username string. This works well, since the Android client sends out a username string at moments such as when a client socket successfully connects to the server socket.
The disaster happens when I try to send a username string, when the user presses the talk button, to inform the other clients who is talking. Let me give an example to make this clearer:
A user whose name is "user1" presses the talk button to talk.
The application sends the string "usr:user1" to the server side.
It then starts to send the audio data generated by the audio recorder.
On the server side, the server receives the exact "user1" string and the following audio data and resends them to the other connected clients. But the problem is that the clients do not seem to receive "usr:user1" all of the time.
Here is how I check the received data:
is = socket.getInputStream();
byte[] buffer = new byte[minBufSize];
numOfReceived = is.read(buffer);
if (numOfReceived != -1 && numOfReceived != minBufSize) {
    // Copy only the bytes actually read into a right-sized array.
    byte[] ub = new byte[numOfReceived];
    for (int i = 0; i < numOfReceived; i++) {
        ub[i] = buffer[i];
    }
    String usersString = new String(ub, "UTF-8");
    if (usersString.contains("hj:")) {
        System.out.println("current:");
        final String userOfTalking = usersString.substring(3, usersString.length());
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                whoIsTalking.setText(userOfTalking + " is talking");
                whoIsTalking.setVisibility(View.VISIBLE);
            }
        });
        continue;
    }
Actually, I have no idea whether the input stream contains audio data or string data, so I tried to use the return value of InputStream.read() to find out how many bytes the input stream read:
If the returned number does not equal -1 (socket closed) or the buffer size I set in outputStream.write(), then I assume it is a string.
But this is highly unreliable. For example, if I loop the command socket.getOutputStream().write(buffer, 0, 100), then I am supposed to read a buffer of length 100 from the input stream. But it is not like this: I often get buffers whose lengths are 60, or 40, or any number less than 100.
It is as if the output stream does not send exactly 100 bytes of data as declared, so my string data just mixes with the following audio data. When the application sends the username right after it connects to the server, the other clients receive the correct string because there is no following audio data to interfere with it.
Can you give me your opinions? Is my guess right? How can I solve this problem? I tried calling Thread.sleep(300) after the application sends the username string when the user presses the talk button, to leave some room before the audio data is sent in case they mix, but it does not work. Any help is much appreciated!
If I've read through this properly... you send exactly 100 bytes, but the subsequent read doesn't get 100, it gets less?
There can be a number of reasons for this. One is that you are not calling flush() when you write. If that's the case then you have a bug and you need to put an appropriate flush() call in your sending code.
Alternatively, it could be because the OS is fragmenting the data between packets. This is unlikely for small packets (100 bytes) but very likely / necessary for large packets...
You should never rely on ALL your data turning up in a single read... you need to read multiple times to assemble all the data.
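A minimal sketch of such a re-assembly loop, assuming the 100-byte messages from the question and a plain InputStream (variable names are mine):

import java.io.EOFException;
import java.io.InputStream;

InputStream in = socket.getInputStream();
byte[] data = new byte[100];                       // the fixed 100-byte message from the question
int off = 0;
while (off < data.length) {
    int n = in.read(data, off, data.length - off); // may return fewer bytes than requested
    if (n == -1) {
        throw new EOFException("connection closed before the full message arrived");
    }
    off += n;
}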
It has been quite a while since I asked this question, and I am going to give my own answer now. Hopefully it's not too late.
Actually, @Philip Couling shared some very valuable insights in his answer; they helped me confirm my guess about the cause of this issue - "the OS is fragmenting the data between packets". Thanks again for his contribution.
The approach to resolve this problem came from one of my friends. He told me that I could create a new socket in the client that connects to the same server to transfer some control information in string format, to tell the server things like who started talking or who stopped talking, or even to let people chat over it. Each socket sends a string to the server to tell what it is doing and which user it belongs to, in a format like "audio stream: username" or "control info: username", and the server just stores them in two ArrayLists or HashMaps respectively. So every time a user presses the button to stream audio, the corresponding control string is sent to the server to tell it who the stream is from, and the server then redirects this information to the other clients over their control sockets. We now transfer the string data on a dedicated socket rather than on the one transferring the audio stream. As a result, "the OS fragments the data" is no longer a problem, because the string data is too short to trigger the OS fragmenting it, and because we only send it on specific events, not continuously like the audio stream.
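To make that setup concrete, here is a rough client-side sketch under my own assumptions (host, port, and variable names are invented; only the tag strings come from the description above):

import java.io.PrintWriter;
import java.net.Socket;

Socket audioSocket   = new Socket("server.example.com", 5000); // assumed host and port
Socket controlSocket = new Socket("server.example.com", 5000);

// Each socket identifies itself once, right after connecting.
PrintWriter audioTag   = new PrintWriter(audioSocket.getOutputStream(), true);
PrintWriter controlTag = new PrintWriter(controlSocket.getOutputStream(), true);
audioTag.println("audio stream: user1");    // server files this socket under the audio list
controlTag.println("control info: user1");  // server files this socket under the control list

// When the talk button is pressed, only the control socket announces it,
// while the raw audio bytes keep going out over audioSocket.getOutputStream().
controlTag.println("usr:user1");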
But the new socket also brings a side effect. Because of network delay, people may find they are still receiving voice for a while after the application tells them someone has stopped talking. The delay could be over 10 seconds in extreme network conditions and may lead to strong noise if someone starts to talk while his phone is still playing the received voice.
To fix this problem, transferring the informational string over the audio socket may be the only choice to keep each side in sync. But I think we could insert some empty bytes between the audio data and the string data to make sure the string won't be mixed with other data (empty bytes should not change the string). However, I have not tried this method yet; I will add the result after I have examined it.
I have 'n' server threads, and each one listens to 1 client.
When a server thread receives a message from its client, it needs to notify the other 'n-1' clients, and that's the reason why I keep a shared object (containing an array of 'n' sockets, one for each client) between the server threads.
Moreover, in the main server thread that holds the ServerSocket, every time I accept a new connection from a client I open a BufferedWriter/BufferedReader on the new socket returned from ServerSocket.accept() to give the client a first answer.
In case of an "OK" answer I open a new thread passing the new socket to it, in order to listen to the new client's following requests.
The problem is that I cannot close the BufferedReader and the BufferedWriter in the main server thread, because that would also close the underlying stream, causing problems for the server thread that is listening to that socket/stream.
And the question: if I open another BufferedReader (bound to the same socket) in the new thread, and then close it, will the other BufferedReaders/BufferedWriters (specifically the ones opened in the main server thread, which I couldn't close before) opened on the same socket be closed? Will an exception be thrown on them?
It would be possible to share the opened BufferedReader/BufferedWriter instead of the socket, to avoid instantiating a new object every time, but this question is about what would happen if I do things in the way described above.
Please tell me if I haven't been clear; my English is not very good.
Closing any Reader or Writer or stream wrapped around a stream closes the wrapped stream.
Closing either the input stream or the output stream of a socket closes the other stream and the socket.
Closing the socket closes both streams.
In other words closing any of it closes all of it.
As noted in comments, multiple buffered streams/Readers/Writers wrapped around a single stream cannot work.
Multiple threads reading from/writing to the same socket is unlikely to work correctly either, unless you take great care with synchronization and buffering.
You should not do any I/O with an accepted socket in the accept loop. Otherwise you can block, which affects further clients.
You need to rethink your design.
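A tiny sketch of the first point, assuming sock is an already connected Socket:

import java.io.BufferedReader;
import java.io.InputStreamReader;

BufferedReader reader = new BufferedReader(new InputStreamReader(sock.getInputStream()));
reader.close();                       // closes the reader, the socket's input stream...
System.out.println(sock.isClosed());  // ...and the socket itself: prints "true"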
Each Socket with an open connection to another Socket has an open InputStream and an open OutputStream. Closing either one of these streams will also close the socket. Closing a socket or its streams will not affect other sockets unless they are connected. You don't want to close any streams unless you also want to close the connection between the sockets using the streams. Please ask if there is something I missed or if you have other questions :)
I am wondering if there is a way to avoid having a TCP RST flag set as opposed to a TCP FIN flag when closing a connection in Netty where there is input data remaining in the TCP receive buffer.
The use case is:
Client (written in C) sends data packets containing many fields.
Server reads packets, encounters an error on an early field, throws an exception.
Exception handler catches the exception, writes an error message, and adds the close on write callback to the write future.
The problem is:
Remaining data in the receive buffer causes Linux (or Java..) to flag the TCP packets with the RST flag. This prevents the client from reading the data since when it gets around to trying it finds it has a read error due to the socket being closed.
With a straight Java socket, I believe the solution would be to call socket.shutdownOutput() before closing. Is there an equivalent function in Netty or way around this?
If I simply continue reading from the socket, it may not be enough to avoid the RST since there may or may not be data in the buffer exactly when close is called.
For reference: http://cs.baylor.edu/~donahoo/practical/CSockets/TCPRST.pdf
UPDATE:
Another reference and description of the problem: http://docs.oracle.com/javase/1.5.0/docs/guide/net/articles/connection_release.html
Calling shutdownOutput() should help with a more orderly closing of the connection (by sending a FIN), but if the client is still sending data then RST messages will be sent regardless (see the answer from EJP). A shutdownOutput() equivalent may be available in Netty 4+.
Solutions are either to read all data from the client (but you can never be sure when the client will fully stop sending, especially in the case of a malicious client), or to simply wait before closing the connection after sending the response (see the answer from irreputable).
If you can get hold of the underlying SocketChannel from Netty, which I am no expert about, you can call channel.socket().shutdownOutput().
Remaining data in the receive buffer causes Linux (or Java..) to flag the TCP packets with the RST flag. This prevents the client from reading the data since when it gets around to trying it finds it has a read error due to the socket being closed.
I don't understand this. TCP guarantees that the client will receive all the data in its socket receive buffer before he gets the FIN. If you are talking about the server's socket receive buffer, it will be thrown away by the close(), and further attempts by the client to send will get an RST, which becomes an IOException: 'connection reset', because there is no connection to associate it with and therefore nowhere to put it. NB It is TCP that does all this, not Java.
But it seems to me you should read the whole request before closing the channel if it's bad.
You could also try increasing the socket receive buffer so it is big enough to hold an entire request. That ensures that the client won't still be sending when you want to close the connection. EDIT: I see the request is megabytes so this won't work.
Can you try this: after server writes the error message, wait for 500ms, then close(). See if the client can receive the error message now.
I'm guessing that the packets in the server receive buffer have not been ACK-ed, due to TCP delayed acknowledgement. If close() is called now, the proper response for these packets is RST. But if shutdownOutput() is invoked, it's a graceful close process; the packets are ACK-ed first.
EDIT: another attempt after learning more about the matter:
The application protocol is, the server can respond anytime, even while the client request is still being streamed. Therefore the client should, assuming blocking mode, have a separate thread reading from server. As soon as the client reads a response from server, it needs to barge into the writing thread, to stop further writing to the server. This can be done by simply close() the socket.
On the server side, if the response is written before all request data are read, and close() is called afterwards, most likely RST will be sent to client. Apparently most TCP stacks send RST to the other end if close() is called when the receive buffer isn't empty. Even if the TCP stack doesn't do that, very likely more data will arrive immediately after close(), triggering RST anyway.
When that happens, the client will very likely fail to read the server response, hence the problem.
So the server can't immediately close() after response, it needs to wait till client receives the response. How does the server know that?
First, how does the client know that it has received the full response? That is, how is the response terminated? If the response is terminated by the TCP FIN, the server must send the FIN after the response, by calling shutdownOutput(). If the response is self-terminated, e.g. by an HTTP Content-Length header, the server need not call shutdownOutput().
After the client receives the full response, per the protocol, it should promptly quit sending more data to the server. This is done by crudely severing the connection; the protocol doesn't provide a more elegant way. Either FIN or RST is fine.
So the server, after writing the response, should keep reading from the client, till EOF or error. Then it can close() the socket.
However, there should be a timeout for this step, to account for malicious/broken clients and network problems. Several seconds should be sufficient to complete the step in most cases.
Also, the server may not want to read from the client, since it isn't free. The server can simply wait past the timeout, then close().
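Putting those steps together, a rough plain java.net.Socket sketch of the sequence (the 5-second timeout, names, and error handling are all assumptions; the Netty equivalent depends on the Netty version):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

void respondAndClose(Socket sock, byte[] response) throws Exception {
    OutputStream out = sock.getOutputStream();
    out.write(response);
    out.flush();
    sock.shutdownOutput();                 // send FIN: "no more data from this side"

    sock.setSoTimeout(5000);               // don't wait forever for a broken or malicious client
    InputStream in = sock.getInputStream();
    byte[] drain = new byte[8192];
    try {
        while (in.read(drain) != -1) {
            // discard whatever the client is still sending, until EOF
        }
    } catch (SocketTimeoutException e) {
        // the client kept sending (or went silent) past the timeout; give up
    }
    sock.close();
}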
I have several questions.
1. I have two computers connected by a socket connection. When the program executes
outputStream.writeInt(value);
outputStream.flush();
what actually happens? Does the program wait until the other computer reads the integer value?
2. How can I empty the outputStream or inputStream? Meaning, when emptying the outputStream or inputStream, whatever has been written to that stream gets removed. (Please don't suggest doing it by closing the connection!)
I tried to empty the inputStream this way:
byte[] eatup = new byte[20 * 1024];
int available = 0;
while (true) {
    available = serverInputStream.available();
    if (available == 0)
        break;
    serverInputStream.read(eatup, 0, available);
}
eatup = null;
String fileName=(String)serverInputStream.readObject();
The program should not process this line, as nothing else is being written to the outputStream. But my program executes it anyway and throws a java.io.OptionalDataException.
Note: I am working on a client-server file transfer project. The client sends files to the server. The second code snippet is for the server side. If the 'cancel button' is pressed on the server end, it stops reading bytes from the serverInputStream and sends a signal (I used int -1) to the client. When the client receives this signal it stops sending data to the server, but I've noticed that the serverInputStream is not empty. So I need to empty this serverInputStream so that the client computer is able to send files to the server computer again (that's why I can't manage a lock from the read method).
1 - No. On flush(), the data will be written to the OS kernel, which will likely immediately hand it to the network card driver, which in turn will send it to the receiving end. In a nutshell, the send is fire-and-forget.
2 - As Jeffrey commented, available() is not reliable for this sort of operation. If doing blocking IO then, as he suggests, you should just use read() speculatively. However, it should be said that you really need to define a protocol on top of the raw streams, even if it's just using DataInput/DataOutputStream.

When using raw write/read, the golden rule is: one write != one read. For example, if you were to write 10 bytes on one side and had a reading loop on the other, there is no guarantee that one read will read all 10 bytes. It may be "read" as any combination of chunks. Similarly, two writes of 10 bytes might appear as one read of 20 bytes on the receiving side. Put another way, there is no concept of a "packet" unless you create a higher-level protocol on top of the raw bytes to do packets. An example would be prefixing each send with a byte length so the receiving side knows how much data to expect in the current packet.
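As a hedged sketch of that length-prefix idea (none of this is from the original project; socket is assumed to be a connected Socket):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.nio.charset.StandardCharsets;

// Sender: prefix every packet with its length.
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
out.writeInt(payload.length);
out.write(payload);
out.flush();

// Receiver: read the length first, then keep reading until the whole packet is in.
DataInputStream in = new DataInputStream(socket.getInputStream());
int length = in.readInt();
byte[] packet = new byte[length];
in.readFully(packet);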
If you need to do anything more complicated than a basic app, I strongly encourage you to investigate some higher-level libraries that have solved many of the gnarly issues of network IO. I would recommend Netty, which I use for production apps. However, it is quite a big leap in understanding from simple IO streams to Netty's more event-based system. There may be other libraries somewhere in the middle.
I have a Java socket server, and the connection socket is working just fine. What I need help with is streaming a response back to the client.
I get the output stream with socket.getOutputStream(). How can I make it so that when I write to the output stream it is sent immediately, while still being able to send another chunk of data later on the same connection?
I tried simply using write(), and write() in conjunction with flush(), but I don't really know what I am doing...
Depending on the native implementation, the socket may have a buffer and not send the bytes the second you call write(). flush(), however, will force the bytes to be sent. Typically it is good practice to send larger chunks rather than byte by byte (for streaming you generally start by building up a buffer on the receiver's side). Optimal network usage is likely to come from sending packets as large as possible (limited by the MTU). To have a local buffer in Java, wrap the socket's OutputStream in a BufferedOutputStream.
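A hedged sketch of that arrangement (firstChunk and secondChunk are assumed byte arrays; the buffer size is arbitrary):

import java.io.BufferedOutputStream;
import java.io.OutputStream;

OutputStream raw = socket.getOutputStream();
BufferedOutputStream out = new BufferedOutputStream(raw, 8192); // local buffer on the sending side

out.write(firstChunk);   // sits in the local buffer...
out.flush();             // ...until flush() pushes it down to the socket right away

// Later, on the same open connection, another chunk can be sent the same way.
out.write(secondChunk);
out.flush();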
flush() will force the data to be sent to the OS. The OS can buffer the data, and so can the OS on the client. If you want the OS to send data earlier, I suggest you try turning Nagle's Algorithm off: socket.setTcpNoDelay(true); However, you will find that OS/driver parameters can still introduce some buffering/packet coalescing.
If you look at Sun's JDK 6 java.net.SocketOutputStream you will see the flush() method does nothing. This is not guaranteed to be the case on all platforms, and a flush() may be required.
Another solution could be DataOutputStream:
DataOutputStream dataOut = new DataOutputStream(socket.getOutputStream());
dataOut.writeInt(1);
dataOut.flush(); // push the bytes down to the socket right away