Controlling a Java socket's connection

I'm writing a client-server application, using java TCP sockets.
Client and server are connected by a socket.
Sometimes the server has to write a reply message to the client on this socket.
At that moment, however, the client's socket may already be gone, not because close() was called, but because the client application was terminated.
How can the server recognize this situation and avoid writing its reply to this socket?

This is impossible to do reliably. If you establish that a connection is open, by the time you get around to writing to it, it may have been closed. The reliable solution is to attempt the write, and handle any errors that may result.
Note that if you do get an error indication, there is no saying how much data got to the remote peer. If you perform two writes, and the second write gets an error indication, it is quite possible that the remote peer shut down before the first write but the local peer only noticed it during the second write.
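For illustration, a minimal sketch of that approach on the server side (the class and method names are invented for the example, not taken from the question):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;

    public final class ReplySender {

        // Attempt the write and treat any IOException as "the client has gone away".
        // Note: a successful write only means the bytes reached the local TCP buffers;
        // an error may only surface on a later write.
        public static boolean trySendReply(Socket clientSocket, byte[] reply) {
            try {
                OutputStream out = clientSocket.getOutputStream();
                out.write(reply);
                out.flush();
                return true;
            } catch (IOException e) {
                try {
                    clientSocket.close();   // stop using the dead connection
                } catch (IOException ignored) {
                }
                return false;
            }
        }
    }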

Related

java help understanding how socket connections work

I am completely new to creating a network connection in java, so I apologize if this is a stupid question.
I am trying to create a D&D companion in java that will allow a player to create their character and then send it to the DM so that they can view it, make changes, and send it back to the player. I want to be able to make it so that any time a field is changed on one computer it will also be changed on the other computer.
After a bunch of research online I have been able to create a socket connection between the DM (server) and the player (client) and pass a message between the two, but I am not sure how a socket connection works after this initial connection is made. My research has not been very clear on this. I have found many resources that have said that java closes the socket after a message has been passed and many that say that the socket stays open.
If java closes the socket then my problem is easy enough to solve because then I will just have to open a new socket every time I need to pass data making sure that I pass the IP address of the client to the server the first time I make a connection.
My real questions come in when a socket stays open.
If the socket stays open and multiple clients connect to the server, will the server just shout over the network whenever it transmits a message so that all clients receive the message? (If this is the case then I know I can just attach a username to the front of the message so that the client can determine if the server is talking to it.)
If the server does not shout then how do I specify which client I want the server to talk to?
Will I have to add a loop to my receive methods so that the client/server is constantly listening for a transmission from the server/client or will java automatically do so after I run the method the first time?
I have found many resources that have said that java closes the socket after a message has been passed
You found them where?
and many that say that the socket stays open.
All those are correct. Java never closes connections. The application closes connections.
If java closes the socket then my problem is easy enough to solve because then I will just have to open a new socket every time I need to pass data making sure that I pass the IP address of the client to the server the first time I make a connection.
It doesn't.
My real questions come in when a socket stays open.
If the socket stays open and multiple clients connect to the server, will the server just shout over the network whenever it transmits a message so that all clients receive the message?
No. It will respond via the socket that is connected to the corresponding client.
(If this is the case then I know I can just attach a username to the front of the message so that the client can determine if the server is talking to it.)
Unnecessary.
If the server does not shout then how do I specify which client I want the server to talk to?
The server responds via the same socket it read the request from.
Will I have to add a loop to my receive methods so that the client/server is constantly listening for a transmission from the server/client
No, you will have to add a thread per accepted socket that loops reading requests until end of stream (see the sketch at the end of this answer).
or will java automatically do so after I run the method the first time?
No.
You seem to have been reading some truly appalling drivel. Take a look at the Custom Networking section of the Java Tutorial.
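For what it's worth, here is a rough sketch of the thread-per-accepted-socket model (the port number and the echo protocol are made up for the example):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public final class SimpleServer {

        public static void main(String[] args) throws IOException {
            try (ServerSocket serverSocket = new ServerSocket(5000)) {
                while (true) {
                    // accept() returns a Socket dedicated to exactly one client.
                    Socket client = serverSocket.accept();
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket socket = client;
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                String line;
                // readLine() returns null at end of stream, i.e. when this client closes.
                while ((line = in.readLine()) != null) {
                    out.println("echo: " + line);   // the reply goes back on the same socket
                }
            } catch (IOException e) {
                // Connection dropped; just let this thread end.
            }
        }
    }

Each client gets its own Socket and its own thread, so replies never get mixed up between clients.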
Adding to EJP's wise answer, it might be worth clarifying:
Sounds like you (wisely) use TCP, so your Socket represents a connection between 1 server and 1 client. No "shouting". In typical examples, when the connection is established (namely, the client obtains a Socket by calling "new Socket" and the server obtains a Socket by calling "accept"), those Sockets are dedicated to those 2 specific endpoints. So if 10 clients connect to 1 server, the server will keep 10 Sockets and won't mix them up. A bit like a poor secretary who has 10 phones on his desk and answers them all - despite the mess, each earpiece is clearly connected to 1 customer.
The connection can hold for a while & serve several messages. It will terminate when either one of the sides calls 'socket.close', or it can be terminated by underlying 3rd parties (operating system, proxies, firewalls).
For your first version, or for simple business requirements, it's probably enough to converse over this 1 simple connection. However, for commercial critical data that requires 'assurance of delivery', you might need to invest some careful thought & possibly tools such as RabbitMQ.
Good luck:)

Netty how to count actual written to socket bytes and take last write time?

I'm writing a highly loaded client/server application. There are cases on some OSes when the connection is lost but Netty doesn't know about it (because TCP/IP itself doesn't do any pinging). So I decided to implement connection pinging at my application level.
Then I faced the next problem: a ping from the server cannot reach the client and come back within a reasonable time when the server sends too many messages to the client over a slow network connection (the write buffer high water mark is rather big, several MB). In this case the server breaks the connection even though it is alive and working.
So I decided to look at I/O processing while pinging as well, so that I could treat the following situation as normal: the ping has timed out, but bytes from the server are still being processed and written to the socket.
However, it looks like it's impossible in Netty to count the bytes actually written to the socket and to measure the time of the last write to the socket, because NioSocketChannel.doWrite(ChannelOutboundBuffer in) doesn't have any callbacks for that. And I don't want to hack the Netty code by somehow overriding the NioSocketChannel doWrite method.
I'm using Netty 4.0.42.
Any help is appreciated!
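One way to approach this without touching NioSocketChannel might be the following (a sketch only, assuming Netty 4.x; the handler name and its placement are my own): add an outbound handler near the head of the pipeline and record a timestamp in a listener on each write's promise, which is notified once the data has actually been written to the socket (or the write has failed).

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelOutboundHandlerAdapter;
    import io.netty.channel.ChannelPromise;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative sketch: tracks bytes written to the socket and the time of the
    // last successful write, using the per-write promise instead of doWrite().
    public final class WriteActivityTracker extends ChannelOutboundHandlerAdapter {

        private final AtomicLong lastWriteNanos = new AtomicLong(System.nanoTime());
        private final AtomicLong writtenBytes = new AtomicLong();

        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
            // Near the head of the pipeline the outbound message should already be a ByteBuf.
            final int size = (msg instanceof ByteBuf) ? ((ByteBuf) msg).readableBytes() : 0;
            promise.addListener(future -> {
                if (future.isSuccess()) {
                    lastWriteNanos.set(System.nanoTime());
                    writtenBytes.addAndGet(size);
                }
            });
            ctx.write(msg, promise);
        }

        public long nanosSinceLastWrite() {
            return System.nanoTime() - lastWriteNanos.get();
        }

        public long totalWrittenBytes() {
            return writtenBytes.get();
        }
    }

The ping-timeout logic could then treat the connection as still alive as long as nanosSinceLastWrite() stays below some threshold, even when the ping reply itself is stuck behind a full write buffer.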

Netty - Does closing the socket at the client end close the channel at the server

My TCP server is implemented using Netty. My client using vanilla java.net.Socket to connect to this server. I'm using the same socket to send multiple requests to the server. Once done with all the requests the client calls socket.close().
I'm not closing the channel anywhere in my server code. Also, I've set TCP KEEP_ALIVE on my server. Will closing the socket on the client end automatically close the channel on the server, or do I have to do something else explicitly, and what is the best practice?
Usually, if an application closes a socket, its remote peer also notices the closure. Therefore, you don't need to call close() on both sides. However, sometimes, due to network problems, you might not get notified when the remote peer closes the connection. To work around this problem, it's a good idea to send some message periodically; then you will detect an unexpected closure sooner.
Please note that SO_KEEPALIVE will not help much here, because on most operating systems the default keep-alive time is very long (typically two hours).
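For illustration, a minimal Netty-side sketch of noticing the client's close (the handler name is made up; it would be added to the child channel's pipeline):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    // Netty fires channelInactive() on the server's child channel when the client's
    // close (or a detected connection failure) makes the channel go inactive.
    public final class ConnectionWatcher extends ChannelInboundHandlerAdapter {

        @Override
        public void channelInactive(ChannelHandlerContext ctx) throws Exception {
            System.out.println("Client " + ctx.channel().remoteAddress() + " disconnected");
            ctx.fireChannelInactive();   // let the rest of the pipeline see the event too
        }
    }

For the periodic messages suggested above, an io.netty.handler.timeout.IdleStateHandler placed earlier in the same pipeline is the usual building block.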

C# / Java Socket Connection Lost?

I am making a C# client / Java server chatroom, and everything works fine now, except for one thing:
After some time (an hour or so) of not using the application (or using it, I don't know), the C# client gets a SocketException in the socket.EndReceive() call:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
Do Java or C# socket connections close after some time of idling? Or is it just the tcp protocol?
What would be the best method to fix it?
Thanks all!
Bas
Do Java or C# socket connections close after some time of idling?
No.
However, firewalls and especially NAT gateways do, often silently.
What would be the best method to fix it?
Implement a heartbeat procedure, i.e. the client and/or server periodically sends (e.g. every 10 or 30 seconds) a special message that's just used to keep the connection alive and to detect a failed peer faster.
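A minimal Java-side sketch of such a heartbeat (the "PING" message, the 30-second interval, and the onDead callback are all invented for the example; the peer is assumed to read and ignore these lines):

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public final class Heartbeat {

        // Sends a "PING" line every 30 seconds; a write error means the connection
        // (or something in between, such as a NAT gateway) has dropped it.
        public static ScheduledExecutorService start(Socket socket, Runnable onDead) throws IOException {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                out.println("PING");
                if (out.checkError()) {        // PrintWriter swallows IOExceptions internally
                    onDead.run();
                    scheduler.shutdown();
                }
            }, 30, 30, TimeUnit.SECONDS);
            return scheduler;
        }
    }

Detecting a failed peer quickly on the receiving side additionally needs a read timeout or a reply check, along the lines of the answers further down.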

How to identify a broken socket connection in Java immediately?

I have a typical java client and a server. The client sends some request to the server and waits for the response. The client reads up to, say, 100 bytes of data from the socket's input stream into a byte array. It waits for the complete response of 100 bytes to be read within a specified timeout period of, say, 3 seconds. The problem here is to identify whether the server went down or crashed while/before writing the response. Basically, we need to identify whether the socket was broken or the peer disconnected for some reason. Is there a way to identify this?
How to identify a broken socket connection in Java immediately?
You can't detect it immediately, in Java or any other language. TCP/IP doesn't know, so Java can't know. The only sure way to detect a broken TCP connection is by writing to it and catching IOExceptions, and they won't happen immediately.
The best way to identify that the connection is down is to time out the connection, i.e. you expect a response within a given amount of time and flag the connection as down if that response does not arrive as expected.
When you have a graceful disconnection (e.g. the other end calls close()), the read on the connection will let you know once the buffer has been drained.
However, if there is some other type of failure, you might not be notified until the OS times out the connection (e.g. after 3 minutes), and indeed you may want to keep the connection: e.g. if you pull the network cable out for 10 seconds and put it back in, that doesn't need to be treated as a failure.
EDIT: I don't believe it's a good idea to be too aggressive in automatically handling connection/service "failures". This is usually better handled by a planned fix to the system, based on investigation of the true cause, e.g. increased bandwidth, redundant connectivity, faster servers, code fixes.
If the connection is broken abnormally, you will receive an IOException when reading; that normally happens quite fast, but there are no guarantees about timing - it all depends on the OS, network hardware, etc. If the remote end gracefully closes the socket, you'll read -1 as the next byte.
Assuming everything else works, if the remote peer - the TCP server - was killed then the TCP client will normally receive a TCP RST (reset) and you'll get an IOException in your client application.
However, there are lots of other things that can go wrong besides a process being killed. Basically anything on the network path between the two processes: a cable is yanked, a router dies, a firewall dies, etc. All of this will not immediately be detected.
For the above reasons the general rule is - as pointed out in the answer from EJP - that a broken connection can only be detected by writing to it. This is why it is always recommended that a TCP client and TCP server exchange some type of heartbeat messages at regular intervals. There are different ways to do this. I like best the method where the TCP client will - in the absence of data being received from the TCP server - send a heartbeat message to the server and expect a reply back within a certain time period. This way heartbeat messages will only be sent when really needed.
A sub-optimal approach - if you cannot implement true heartbeating - is to always read with a timeout. Set the timeout on the socket and then catch java.net.SocketTimeoutException. This will allow you to know that no data has been received on the socket for x milliseconds.
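A sketch of that read-with-a-timeout approach (the 3-second timeout and 100-byte buffer mirror the question; everything else is illustrative):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public final class TimedReader {

        // Distinguishes the three cases described above: data arrived, nothing arrived
        // within the timeout, or the connection ended (gracefully or abnormally).
        public static void readLoop(Socket socket) throws IOException {
            socket.setSoTimeout(3000);                 // 3-second read timeout
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[100];
            while (true) {
                try {
                    int n = in.read(buffer);
                    if (n == -1) {                     // graceful close by the peer
                        System.out.println("Peer closed the connection");
                        return;
                    }
                    System.out.println("Received " + n + " bytes");
                } catch (SocketTimeoutException e) {
                    // No data for 3 seconds; the connection may still be fine.
                    System.out.println("No data yet, still waiting");
                } catch (IOException e) {
                    // Abnormal break (reset, or a network failure noticed by the OS).
                    System.out.println("Connection broken: " + e.getMessage());
                    return;
                }
            }
        }
    }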
It should be mentioned that there's one scenario where you don't have to use heartbeating or a socket timeout: if the TCP client and the TCP server communicate over a loopback interface, then a broken connection will always be propagated to both the TCP client application and the TCP server application. This is because, in this case, there's really no network infrastructure between the two processes. So if you have an existing application which isn't well designed with respect to its TCP communication (i.e. it doesn't implement some form of heartbeating or at least reading with a timeout), then as a last resort you may 'fix' the problem by moving the two applications onto the same host and letting them communicate over the loopback interface.
