I'm working on a network app written in Java, using ObjectOutputStream and ObjectInputStream on top of Sockets to exchange messages. My code looks like this:
Sender:
ObjectOutputStream out;
ObjectInputStream in;
try {
    Socket socket = new Socket(address, port);
    socket.setSoLinger(true, socketLingerTime);
    out = new ObjectOutputStream(socket.getOutputStream());
    out.writeObject(message);
    out.flush();
    out.close();
} catch (variousExceptions) ...
Receiver:
Object incoming;
try {
    incoming = myObjectInputStream.readObject();
} catch (SocketException socketError) {
    if (socketError.getMessage().equals("Connection reset")) {
        // this is the exception I get
    }
}
Sometimes the message goes through ok, but other times I get the marked exception instead of an object. Isn't flush supposed to force the message through to the other side? Am I somehow using the function incorrectly? Or is this some sort of bug in the underlying Java/OS network code?
Thanks!
UPDATE:
I've done some more snooping on this, and it seems to only happen when the system's resources are being taxed by something. I've not been able to replicate it outside the VirtualBox VM, but that could just be because the VM doesn't have many resources to begin with. I'll keep this question updated as I look into it further.
It turns out the issue was caused by Nagle's Algorithm; the output buffer is within the OS, so it wasn't affected by flush. The solution is to turn Nagle's Algorithm off using Socket.setTcpNoDelay(true), and buffer messages at the user level using BufferedOutputStream.
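For reference, here is a minimal sketch of the sender with both changes applied; the Sender class, the send() signature, and the variable names are just illustrative, not the exact code:

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

public class Sender {
    public static void send(String address, int port, Serializable message) throws IOException {
        try (Socket socket = new Socket(address, port);
             ObjectOutputStream out = new ObjectOutputStream(
                     new BufferedOutputStream(socket.getOutputStream()))) {
            // Disable Nagle's algorithm so small writes are not held back inside the OS.
            socket.setTcpNoDelay(true);
            // Buffering now happens in the BufferedOutputStream, so flush() controls
            // when the bytes actually leave the JVM.
            out.writeObject(message);
            out.flush();
        } // try-with-resources closes the stream and then the socket
    }
}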
In my case it was a silly problem, but it wasted 4 hours of my time.
You just have to terminate the message with a newline, e.g. outStream.write(mess + "\n"); (or use a PrintWriter's println()).
Since reader.readLine() reads until it finds a '\n' character, write() alone won't work.
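A small sketch of that point, assuming a PrintWriter on the sending side and readLine() on the receiving side (the class and method names are just for illustration):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

class LineExchange {
    // Sender side: println() appends the line terminator that readLine() waits for.
    static void sendLine(Socket socket, String mess) throws IOException {
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true); // autoflush on println
        out.println(mess);
    }

    // Receiver side: readLine() blocks until it sees a line terminator,
    // or returns null when the peer closes the connection.
    static String receiveLine(Socket socket) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        return in.readLine();
    }
}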
You should be able to send one object per connection.
To ensure resources are cleaned up in an orderly manner, it is best to close the socket as well as the output stream.
close() will call flush(), so the explicit flush() should be redundant.
What happens if you don't set SO_LINGER?
What is the actual exception you are getting?
It sounds like a firewall in one of the routers in the path from client to server is sending an RST for some reason. I don't believe there's anything wrong with your code. I tried to replicate the problem, but couldn't.
Connection resets can be caused by writing to a connection that is already closed at the other end. Detection can occur at the next I/O or a subsequent one, e.g. a read. In other words it can be caused by a bug in your application protocol. SO_LINGER won't help, don't mess with this.
Related
Hello Stack Overflow world, I've been struggling with one of the most straightforward and common problems within Java IO for some time, and now need your help to tackle it.
Check out this piece of code I have in a try block, within a thread.run():
// connect to client socket, and setup own server socket
clientSocket = new Socket(serverHostname, CLIENT_PORT);
//send a test command to download a file
String downloadFileName = "sample.txt";
DataOutputStream dataOutputStream = new DataOutputStream(clientSocket.getOutputStream());
System.out.println("Sending a request to download file : " + downloadFileName + " from user: Arsa node"); //todo: replace with node user later
dataOutputStream.writeUTF("D/sample.txt");
//close socket if host isn't detected anymore, and if socket doesn't become null suddenly
dataOutputStream.flush();
dataOutputStream.close();
System.out.println("****File has been sent****");
in = new DataInputStream(clientSocket.getInputStream());
byte[] retrievedFileData = new byte[8036];
if (in.readInt() > 0) {
    System.out.println("Starting file download!");
    in.read(retrievedFileData);
    System.out.println("File data has been read, converting to file now");
    // closing input stream will close socket also
    in.close();
}
clientSocket.close();
Two main questions that have been confusing me to death:
Why does dataOutputStream.close() need to be run for writeUTF() to actually send my string to the server socket? I find that when I don't have dos.close(), the data isn't retrieved on the other side. Further, because I close it, I can no longer read from the socket - it seems the socket connection becomes closed once the OutputStream is closed...
What's a better way, following some sort of pattern, to do this? For context, all I'm trying to do is write the filename I'm looking to download to my client, then read the response right away, which I expect to be the bytes of the file; any error handling I will consider as part of my development.
Overall, it shouldn't be complicated to write something to a socket, then read and ingest its response... which doesn't seem to be the case here.
Any help would be greatly appreciated! If the ServerSocket code snippet is needed, I'm happy to share it.
The observed behavior is just a side-effect of close(), as it calls flush() before closing to make sure any buffered data is sent. To solve your problem, you need to call the flush() method instead of closing.
This behavior is not unique to DataOutputStream: a lot of other OutputStream (or Writer) implementations apply buffering, and you will need to flush when you want to ensure the data is sent to the client, written to disk or otherwise processed.
BTW: DataOutputStream and DataInputStream are for a very specific binary data format that is particular to Java. You may want to consider carefully whether this is the right protocol to use.
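As a rough sketch of the flush-instead-of-close approach (the length-prefixed protocol below just mirrors the readInt() in the question; the class and method names are assumptions):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

class FileRequestClient {
    static byte[] requestFile(Socket socket, String fileName) throws IOException {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        DataInputStream in = new DataInputStream(socket.getInputStream());

        out.writeUTF("D/" + fileName);
        out.flush();               // push the request out without closing the socket

        int length = in.readInt(); // assumes the server sends the file length first
        byte[] data = new byte[length];
        in.readFully(data);        // read() can return fewer bytes; readFully() loops until done
        return data;
        // close the socket itself (not just one stream) once the whole exchange is finished
    }
}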
I am running into some issues with the Java socket API. I am trying to display the number of players currently connected to my game. It is easy to determine when a player has connected. However, it seems unnecessarily difficult to determine when a player has disconnected using the socket API.
Calling isConnected() on a socket that has been disconnected remotely always seems to return true. Similarly, calling isClosed() on a socket that has been closed remotely always seems to return false. I have read that to actually determine whether or not a socket has been closed, data must be written to the output stream and an exception must be caught. This seems like a really unclean way to handle this situation. We would just constantly have to spam a garbage message over the network to ever know when a socket had closed.
Is there any other solution?
There is no TCP API that will tell you the current state of the connection. isConnected() and isClosed() tell you the current state of your socket. Not the same thing.
isConnected() tells you whether you have connected this socket. You have, so it returns true.
isClosed() tells you whether you have closed this socket. Until you have, it returns false.
If the peer has closed the connection in an orderly way:
read() returns -1
readLine() returns null
readXXX() throws EOFException for any other XXX.
A write will throw an IOException: 'connection reset by peer', eventually, subject to buffering delays.
If the connection has dropped for any other reason, a write will throw an IOException, eventually, as above, and a read may do the same thing.
If the peer is still connected but not using the connection, a read timeout can be used.
Contrary to what you may read elsewhere, ClosedChannelException doesn't tell you this. [Neither does SocketException: socket closed.] It only tells you that you closed the channel, and then continued to use it. In other words, a programming error on your part. It does not indicate a closed connection.
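A minimal sketch of detecting those conditions with a plain blocking read (the class and method names are placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketException;

class PeerCloseDetector {
    // Returns normally when the peer closes the connection, cleanly or not.
    static void readLoop(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();
        byte[] buffer = new byte[4096];
        while (true) {
            int n;
            try {
                n = in.read(buffer);
            } catch (SocketException e) {
                // connection reset / dropped for some other reason
                return;
            }
            if (n == -1) {
                // orderly close by the peer: read() returned end-of-stream
                return;
            }
            // ... process buffer[0..n) ...
        }
    }
}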
As a result of some experiments with Java 7 on Windows XP it also appears that if:
you're selecting on OP_READ
select() returns a value of greater than zero
the associated SelectionKey is already invalid (key.isValid() == false)
it means the peer has reset the connection. However this may be peculiar to either the JRE version or platform.
It is general practice in various messaging protocols to keep heartbeating each other (keep sending ping packets); the packets do not need to be very large. This probing mechanism lets you detect a disconnected client even before TCP figures it out in general (the TCP timeout is far higher). Send a probe and wait, say, 5 seconds for a reply; if you do not see a reply for, say, 2-3 subsequent probes, your player is disconnected.
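A rough sketch of such a probe on one side, assuming newline-terminated ping messages over the same socket; the message format, the 5-second timeout, and the class name are all illustrative:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

class Heartbeat {
    // Probes the peer a few times; treats repeated silence as a disconnect.
    static boolean peerAlive(Socket socket, int maxMissedProbes) throws IOException {
        socket.setSoTimeout(5000); // wait at most 5 seconds for each reply
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        for (int missed = 0; missed < maxMissedProbes; missed++) {
            out.println("PING");
            try {
                return in.readLine() != null; // any reply means the peer is still there;
                                              // null means it closed the connection
            } catch (SocketTimeoutException e) {
                // no reply within 5 seconds; try the next probe
            }
        }
        return false; // several probes in a row went unanswered: consider the player disconnected
    }
}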
Also, related question
I see the other answer just posted, but I think you are interacting with clients playing your game, so I may pose another approach (while BufferedReader is definitely valid in some cases).
If you wanted to... you could delegate the "registration" responsibility to the client. I.e. you would have a collection of connected users with a timestamp on the last message received from each... if a client times out, you would force a re-registration of the client, but that leads to the quote and idea below.
I have read that to actually determine whether or not a socket has been closed, data must be written to the output stream and an exception must be caught. This seems like a really unclean way to handle this situation.
If your Java code did not close/disconnect the Socket, then how else would you be notified that the remote host closed your connection? Ultimately, your try/catch is doing roughly the same thing that a poller listening for events on the ACTUAL socket would be doing. Consider the following:
your local system could close your socket without notifying you... that is just the implementation of Socket (i.e. it doesn't poll the hardware/driver/firmware/whatever for state change).
new Socket(Proxy p)... there are multiple parties (6 endpoints really) that could be closing the connection on you...
I think one of the features of the abstracted languages is that you are abstracted from the minutia. Think of the using keyword in C# (try/finally) for SqlConnections or whatever... it's just the cost of doing business... I think that try/catch/finally is the accepted and necessary pattern for Socket use.
I faced a similar problem. In my case the client must send data periodically; I hope you have the same requirement. I set SO_TIMEOUT with socket.setSoTimeout(1000 * 60 * 5);, which throws java.net.SocketTimeoutException when the specified time expires. Then I can detect a dead client easily.
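A minimal sketch of that approach (the 5-minute value matches the call above; the class and method names are illustrative):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

class IdleClientDetector {
    static void serve(Socket client) throws IOException {
        client.setSoTimeout(1000 * 60 * 5); // a blocked read gives up after 5 minutes
        try (InputStream in = client.getInputStream()) {
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                // ... handle the data the client is expected to send periodically ...
            }
            // n == -1: the client closed the connection in an orderly way
        } catch (SocketTimeoutException e) {
            // nothing arrived for 5 minutes: treat the client as dead
        } finally {
            client.close();
        }
    }
}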
I think this is the nature of TCP connections; in the standards it takes about 6 minutes of silence in transmission before we conclude that the connection is gone!
So I don't think you can find an exact solution for this problem. Maybe the better way is to write some handy code to guess when the server should assume that a user's connection is closed.
As #user207421 says, there is no way to know the current state of the connection because of the TCP/IP protocol architecture model. So the server has to notify you before closing the connection, or you have to check it yourself.
This is a simple example that shows how to know the socket is closed by the server:
sockAdr = new InetSocketAddress(SERVER_HOSTNAME, SERVER_PORT);
socket = new Socket();
timeout = 5000;
socket.connect(sockAdr, timeout);
reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
while ((data = reader.readLine()) != null)
    log.e(TAG, "received -> " + data);
log.e(TAG, "Socket closed !");
Here is another general solution for any data type.
int offset = 0;
byte[] buffer = new byte[8192];
try {
    do {
        int b = inputStream.read();
        if (b == -1)
            break;
        buffer[offset++] = (byte) b;
        // check offset against the buffer length and reallocate the array if needed
    } while (inputStream.available() > 0);
} catch (SocketException e) {
    // connection was lost
}
//process buffer
That's how I handle it:
while (true) {
    if ((receiveMessage = receiveRead.readLine()) != null) {
        System.out.println("first message same :" + receiveMessage);
        System.out.println(receiveMessage);
    } else {
        // readLine() returned null: the client closed its end of the connection
        System.out.println("Client has disconnected: " + sock.isClosed());
        System.exit(1);
    }
}
That is, the client has disconnected when the result of readLine() is null.
On Linux, write()ing into a socket whose other side has, unknown to you, been closed will provoke a SIGPIPE signal (or exception, however you want to call it). However, if you don't want to be caught out by the SIGPIPE, you can use send() with the MSG_NOSIGNAL flag. The send() call will return -1, and in this case you can check errno, which will tell you that you tried to write to a broken pipe (in this case a socket) with the value EPIPE, which according to errno.h is equivalent to 32. As a reaction to the EPIPE you could double back, try to reopen the socket, and try to send your information again.
Is there a way of knowing when or whether the flush() method of a BufferedOutputStream has finished successfully? In my case I'm using it for sending a simple string through a java.net.Socket. In the following code, the flush() method is run in parallel with the BufferedReader.read() method and the socket output is immediately blocked by the input read, resulting in something that resembles a deadlock. What I would like to do is wait for the output to end, and then start reading the input.
Socket sk = new Socket("192.168.0.112", 3000);
BufferedOutputStream bo = new BufferedOutputStream(sk.getOutputStream());
bo.write(message.getBytes());
bo.flush();
BufferedReader br = new BufferedReader(new InputStreamReader(sk.getInputStream()));
String line = br.readLine();
if (line.equals("ack")) {
    System.out.println("ack");
}
sk.close();
Update
ServerSocket:
ServerSocket ss = new ServerSocket(3000);
System.out.println("server socket open");
while (true) {
    Socket sk = ss.accept();
    System.out.println("new connection");
    BufferedReader br = new BufferedReader(new InputStreamReader(sk.getInputStream()));
    String line = br.readLine();
    System.out.println("received line: " + line);
    BufferedOutputStream bo = new BufferedOutputStream(sk.getOutputStream());
    bo.write("ack".getBytes());
    bo.flush();
    sk.close();
}
Update:
#Global Variable - the reason that read was blocking the socket is that it was waiting for the \n, indeed. Using
bo.write("ack\n".getBytes());
instead of
bo.write("ack".getBytes());
made it work.
Regarding the initial question, is there a way of knowing if flush() method has finished successfully, #Stephen C provided the answer:
there is no way to know that based on the Socket or OutputStream APIs. The normal way to get that sort of assurance is to have the remote application send a "reply" in response, and read it in the local side.
This "reply" is implemented in the code sample and it works.
Is there a way of knowing when or whether the flush() method of a BufferedOutputStream has finished successfully?
It depends on what you mean by "finished successfully".
The flush() method ensures that all unsent data in the pipeline has been pushed as far as the operating system network stack. When that is done, then you could say that flush() has finished successfully. The way that you know that that has happened is that the flush() call returns.
On the other hand, if you want some assurance that the data has (all) been delivered to the remote machine, or that the remote application has read it (all) ... there is no way to know that based on the Socket or OutputStream APIs. The normal way to get that sort of assurance is to have the remote application send a "reply" in response, and read it in the local side.
In the following code, the flush() method is run in parallel with the BufferedReader.read() method and the socket output is immediately blocked by the input read, resulting in something that resembles a deadlock.
The code that you are talking about is basically the correct approach. The way to wait for the response is to read it like that.
If it is not working, then you need to compare what the client and server side are doing:
Is the server waiting for the client to send something more? Maybe an end of line sequence?
Did the server send the response?
Did it flush() the response?
A mismatch between what the client and server are doing can lead to a form of deadlock, but the solution is to fix the mismatch. Waiting for some kind of hypothetical confirmation of the flush() is not the answer.
UPDATE
The problem is indeed a mismatch. For example, the server writes "ack" but the client expects "ack\n". The same happens in the client -> server case ... unless the message always ends with a newline.
Your code is reading with reader.readLine(). Are you writing a \n when writing? You may want to append \n to the string you are writing.
I tried to reproduce your problem. First, I ran into some kind of blocking state too, until I realized I was using readLine on the server side, too. But the message I was sending did not have a concluding \n. Therefore, the server socket was still waiting at its InputStream without sending the client the ACK through its OutputStream. I think #Global Variable is right.
I'm trying to invoke a method from another class, which means I want to use serialization: I make an object of the method name and its parameters and write it to a socket, but when I go to create the ObjectOutputStream I get the error "connection reset by peer: socket write error".
I searched for the possible reasons but I couldn't find any suitable answer.
On the server side I didn't close the sockets or do anything to close them, so I don't know what happens then :-??
in the line:
ObjectOutputStream oos = (new ObjectOutputStream(os));
and my piece of code is this:
InvocationVO invo = new InvocationVO("showStart", treasure, round);
for (int i = 0; i < numPlayer; i++) {
    OutputStream os = socket.get(i).getOutputStream();
    ObjectOutputStream oos = (new ObjectOutputStream(os)); // this has error
    oos.writeObject(invo);
    oos.close();
    os.close();
    Client.getClients()[i].invoke();
}
Thanks for your help in advance!
You are writing to a connection that has already been closed by the peer. I find it hard to believe that didn't turn up in your search. The cause of the problem is firstly that you are closing oos, and therefore the socket, in this code, so (a) it won't run the second time, and (b) closing the socket causes the peer to get an EOS condition and close the socket, so (c) the second time you run this code you will run into at least two problems.
There is a third problem you haven't hit yet. You are creating a new ObjectOutputStream every time you run this code rather than using the same one for the life of the socket. Same goes for ObjectInputStream wherever you use it too. If you do what you are doing here you are liable to run into StreamCorruptedException: invalid type code.
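A sketch of the one-stream-per-socket approach described above; the surrounding class and method names are illustrative, and the InvocationVO from the question would be passed in here as a plain Object:

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

class PlayerConnections {
    private final List<ObjectOutputStream> streams = new ArrayList<>();

    // Create the ObjectOutputStream once, right after the socket is connected.
    void register(Socket socket) throws IOException {
        streams.add(new ObjectOutputStream(socket.getOutputStream()));
    }

    // Reuse the same streams for every message: flush them, but never close them
    // until the connection itself is finished.
    void broadcast(Object invocation) throws IOException {
        for (ObjectOutputStream oos : streams) {
            oos.writeObject(invocation);
            oos.flush();
        }
    }
}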
I have an application that does a lot of work on S3, mostly downloading files from it. I am seeing a lot of these kinds of errors, and I'd like to know if this is something in my code or if the service is really this unreliable.
The code I'm using to read from the S3 object stream is as follows:
public static final void write(InputStream stream, OutputStream output) {
    byte[] buffer = new byte[1024];
    int read = -1;
    try {
        while ((read = stream.read(buffer)) != -1) {
            output.write(buffer, 0, read);
        }
        stream.close();
        output.flush();
        output.close();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
This OutputStream is a new BufferedOutputStream(new FileOutputStream(file)). I am using the latest version of the Amazon S3 Java client, and this call is retried four times before giving up. So, after trying this 4 times, it still fails.
Any hints or tips on how I could possibly improve this are appreciated.
I just managed to overcome a very similar problem. In my case the exception I was getting was identical; it happened for larger files but not for small files, and it never happened at all while stepping through the debugger.
The root cause of the problem was that the AmazonS3Client object was getting garbage collected in the middle of the download, which caused the network connection to break. This happened because I was constructing a new AmazonS3Client object with every call to load a file, while the preferred use case is to create a long-lasting client object that survives across calls - or at least is guaranteed to be around during the entirety of the download. So, the simple remedy is to make sure a reference to the AmazonS3Client is kept around so that it doesn't get GC'd.
A link on the AWS forums that helped me is here: https://forums.aws.amazon.com/thread.jspa?threadID=83326
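A rough sketch of the long-lived client arrangement, assuming the v1 AWS SDK for Java (AmazonS3ClientBuilder and getObject(bucket, key)); the class and method names are illustrative:

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;

class S3Downloader {
    // One long-lived client shared by every download, so it cannot be
    // garbage collected in the middle of a transfer.
    private final AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();

    void download(String bucket, String key, File target) throws IOException {
        S3Object object = s3.getObject(bucket, key);
        try (InputStream in = object.getObjectContent();
             OutputStream out = new BufferedOutputStream(new FileOutputStream(target))) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}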
The network is closing the connection before the client gets all the data, for one reason or another; that's what is going on.
Part of any HTTP request is the content length. Your code is getting the header, which says "hey buddy, here's data, and there's this much of it", and then the connection is dropping before the client has read all of the data, so it's bombing out with the exception.
I'd look at your OS/network/JVM connection timeout settings (though the JVM generally inherits from the OS in this situation). The key is to figure out what part of the network is causing the problem. Is it your computer-level settings saying "nope, not going to wait any longer for packets"? Is it that you are using a non-blocking read with a timeout setting in your code, which is saying "hey, I haven't gotten any data from the server for longer than I'm supposed to wait, so I'm going to drop the connection and throw an exception"? Etc.
Your best bet is to snoop the packet traffic at a low level and trace backwards to see where the connection drop is happening, or to see whether you can raise the timeouts in the things you can control, like your software and the OS/JVM.
First of all, your code is operating entirely normally if (and only if) you suffer connectivity troubles between yourself and Amazon S3. As Michael Slade points out, standard connection-level debugging advice applies.
As to your actual source code, I note a few code smells you should be aware of. Annotating them directly in the source:
public static final void write(InputStream stream, OutputStream output) {
    byte[] buffer = new byte[1024]; // !! Abstract 1024 into a constant to make
                                    // this easier to configure and understand.
    int read = -1;
    try {
        while ((read = stream.read(buffer)) != -1) {
            output.write(buffer, 0, read);
        }
        stream.close();  // !! Unexpected side effects: closing of your passed in
                         // InputStream. This may have unexpected results if your
                         // stream type supports reset, and currently carries no
                         // visible documentation.
        output.flush();  // !! Violation of RAII. Refactor this into a finally block,
        output.close();  // a la Reference 1 (below).
    } catch (IOException e) {
        throw new RuntimeException(e); // !! Possibly indicative of an outer
                                       // try-catch block for RuntimeException.
                                       // Consider keeping this as IOException.
    }
}
(Reference 1)
Otherwise, the code itself seems fine. IO exceptions should be expected occurrences in situations where you're connecting to a fickle remote host, and your best course of action is to draft a sane policy to cache and reconnect in these scenarios.
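For illustration, here is one way the method could be reworked along the lines of the annotations: the buffer size becomes a constant, the caller keeps responsibility for its own streams, the cleanup moves into a try-with-resources (an implicit finally), and the exception stays an IOException. The class and method names are just placeholders:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

final class StreamCopier {
    private static final int BUFFER_SIZE = 1024; // the constant suggested above

    // Copies everything from stream to output; the caller decides who closes what.
    static void write(InputStream stream, OutputStream output) throws IOException {
        byte[] buffer = new byte[BUFFER_SIZE];
        int read;
        while ((read = stream.read(buffer)) != -1) {
            output.write(buffer, 0, read);
        }
        output.flush();
    }

    // Variant that owns both streams and always releases them, even on failure.
    static void copyAndClose(InputStream stream, OutputStream output) throws IOException {
        try (InputStream in = stream; OutputStream out = output) {
            write(in, out);
        }
    }
}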
Try using wireshark to see what is happening on the wire when this happens.
Try temporarily replacing S3 with your own web server and see if the problem persists. If it does it's your code and not S3.
The fact that it's random suggests network issues between your host and some of the S3 hosts.
Also, in my experience S3 can close slow connections.
I would take a very close look at the network equipment nearest your client app. This problem smacks of some network device dropping packets between you and the service. Look to see if there was a starting point when the problem first occurred. Was there any change like a firmware update to a router or replacement of a switch around that time?
Verify your bandwidth usage against the amount purchased from your ISP. Are there times of the day where you're approaching that limit? Can you obtain graphs of your bandwidth usage? See if the premature terminations can be correlated with high-bandwidth usage, particularly if it approaches some known limit. Does the problem seem to pick on smaller files and on large files only when they're almost finished downloading? Purchasing more bandwidth from your ISP may fix the problem.