Detect closed client UDP channel in Netty 4 (Java)

I'm using Netty as part of a UDP server which opens a number of persistent connections with various clients on a single server channel. I need to perform some cleanup when a client terminates the connection. Specifically, I need to know when the connection is terminated and the IP address of the terminating client, so that I can clean up its data.
Perhaps I'm misunderstanding the Netty model, but I'm unsure how to detect this. I normally distinguish between clients by checking the sender's IP address on the DatagramPacket instances; perhaps that is wrong and I should be using multiple channels instead? Either way, I'm looking for some way to handle a closed connection.

You're misunderstanding UDP, not Netty. There is no such thing as a "persistent connection" in UDP; there can only be application-managed sessions, and UDP itself provides no information about whether the other endpoint is still alive. Ideally you'll have an explicit "close" message, but you'll also need a timeout or other mechanism to identify dead sessions and clean them up (crashed clients, etc.). That usually means some sort of garbage-collection process, e.g. a background task that checks a "last heard from" timestamp per client and discards sessions that have been inactive too long.
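A minimal sketch of that approach in Netty 4, tracking a "last heard from" timestamp per sender address and sweeping periodically. The 30-second timeout, the sweep interval, and the cleanUp hook are illustrative assumptions, not anything Netty prescribes:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.DatagramPacket;

import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class UdpSessionHandler extends SimpleChannelInboundHandler<DatagramPacket> {

    private static final long TIMEOUT_MILLIS = 30_000; // assumption: 30s of silence = dead client

    private final Map<InetSocketAddress, Long> lastSeen = new ConcurrentHashMap<>();

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Background "garbage collection": sweep for clients that have gone quiet.
        ctx.executor().scheduleAtFixedRate(() -> {
            long cutoff = System.currentTimeMillis() - TIMEOUT_MILLIS;
            lastSeen.entrySet().removeIf(entry -> {
                if (entry.getValue() < cutoff) {
                    cleanUp(entry.getKey()); // hypothetical per-client cleanup hook
                    return true;
                }
                return false;
            });
        }, TIMEOUT_MILLIS, TIMEOUT_MILLIS, TimeUnit.MILLISECONDS);
        ctx.fireChannelActive();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) {
        // Refresh the "last heard from" timestamp for this sender.
        lastSeen.put(packet.sender(), System.currentTimeMillis());
        // ... normal per-client message handling goes here ...
    }

    private void cleanUp(InetSocketAddress client) {
        // Release whatever per-client state the application holds.
    }
}
```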

Related

Apache MINA: how to detect when you're sending messages to the client over an invalid socket?

I have a server setup using MINA version 2.
I don't have much experience with sockets and TCP.
The problem is that if I make a connection to my server, then unplug my internet and close the connection, the server never gets notification of the closed connection and will forever think that my connection is still active and valid.
The server will continue to send messages to my connection, and doesn't throw any exceptions even though there is nothing on my computer bound to the local port.
How can I test that the connection still exists?
I've tried running MINA logging in debug mode, and logging IoSession.isConnected(), IoSession.isActive(), and IoSession.isClosing().
They always return true, true, false. Also, in debug mode, there was no useful information stating that the connection was lost; it just logged the regular "sent message" output, as if there was nothing wrong.
From using Flash ActionScript, I have had experiences where Flash throws errors saying it's operating on an invalid socket, which suggests the socket on the server side is no longer valid for the connection. So if Flash can detect invalid sockets, a Java server should be able to detect them too, correct?
If there is truly no way to detect dead connections, I can always set up a keep-alive routine where the client constantly sends an "I'm here" message to the server, and the server closes sessions that haven't had an incoming message for some number of seconds.
EDIT: After learning that "sockets" are private and never shared over the network I managed to find better results for my issue and I found this SO thread.
Java socket API: How to tell if a connection has been closed?
Unfortunately, the IOException 'Connection reset by peer' doesn't occur when I write to the IoSession in MINA.
Edit:
Is there any way at all in Java to detect when an ACK to a TCP packet was not received after sending a packet? An ACK Timeout?
Edit:
Yet apparently my computer should send an RST to the server, according to this answer: https://stackoverflow.com/a/1434592/4425643
But that seems like a bad way of port scanning. Is this how port scanning works? Port scanners send data to a port and the victim's service responds with an RST? Sorry, I think I need a new question for all this. But it's odd that MINA doesn't throw 'connection reset by peer' when it sends data, which means my computer doesn't send an RST.
The concept of socket or connection in Internet protocols is an illusion. It's a convenient abstraction that is provided to you by the operating system and the TCP stack, but in reality, it's all fake.
Under the hood, everything on the Internet takes the form of individual packets.
From the perspective of a computer sending packets to another computer, there is no built-in way to know whether that computer is actually receiving the packets, unless that computer (or some other computer in between, like a router) tells you that the packets were, or were not, received.
From the perspective of a computer expecting to receive packets from another computer, there is no way to know in advance whether any packets are coming, will ever come, or in what order -- until they actually arrive. And once they arrive, just the fact that you received one packet does not mean you'll receive any more in the future.
That's why I say connections or sockets are an illusion. The way that the operating system determines whether a connection is "alive" or not, is simply by waiting an arbitrary amount of time. After that amount of time -- called a timeout -- if one side of the TCP connection doesn't hear back from the other side, it will just assume that the other end has been disconnected, and arbitrarily set the connection status to "closed", "dead" or "terminated" ("timed out").
So:
Your server has no clue that you've pulled the plug on your Internet connection. It has no way of knowing that.
Your server's TCP stack has been configured a certain way to wait an arbitrary amount of time before "giving up" on the other end if no response is received. If this timeout is set to a very large period of time, it may appear to you that your server is hanging on to connections that are no longer valid. If this bothers you, you should look into ways to decrease the timeout interval.
Analogy: if you are on a phone call with someone, and there's a very real risk of them being hurt or killed, and you are talking to them and getting them to answer, and then the phone suddenly goes dead... well, how long do you wait? At what point do you assume the other person has been hurt or killed? If you wait a couple of milliseconds, in most cases that's too short a "timeout", because the other person could just be listening and thinking of how to respond. If you wait fifty years, the person might be long dead by then. So you have to set a reasonable timeout value that makes sense.
What you want is a keepalive, heartbeat, or ping.
As per @allquicatic's answer, there's no completely reliable built-in method to do this in TCP. You'll have to implement a method that explicitly asks the client "Are you still there?" and awaits an answer for a specified amount of time.
https://en.wikipedia.org/wiki/Keepalive
A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent this link from being broken.
https://en.wikipedia.org/wiki/Heartbeat_(computing)
In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a system.[1] Usually a heartbeat is sent between machines at a regular interval in the order of seconds. If a heartbeat isn't received for a time—usually a few heartbeat intervals—the machine that should have sent the heartbeat is assumed to have failed.[2]
The easiest way to implement one is to periodically send an arbitrary piece of data, e.g. a null command. A properly programmed TCP stack will time out if an ACK is not received within its specified timeout period, and then you'll get an IOException 'Connection reset by peer'.
You may have to manually tune the TCP parameters, or implement your own functionality, if you want more fine-grained control than the default timeout.
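A minimal sketch of such a heartbeat, assuming a protocol in which a single 0x00 byte is a harmless no-op; the 10-second interval is arbitrary:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Heartbeat {
    // Periodically writes a "null command"; a dead peer eventually surfaces
    // as an IOException on write (after the TCP stack's retransmission timeout).
    public static void start(Socket socket, ScheduledExecutorService scheduler) {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                OutputStream out = socket.getOutputStream();
                out.write(0); // assumption: 0x00 is a no-op in your protocol
                out.flush();
            } catch (IOException e) {
                // The stack gave up on the peer; tear the session down.
                try { socket.close(); } catch (IOException ignored) { }
            }
        }, 10, 10, TimeUnit.SECONDS);
    }
}
```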
The TCP stack is not exposed to Java, and Java does not provide a means to edit the TCP configuration that exists at the OS level.
This means we cannot use TCP keep-alive efficiently in Java, because we can't change its default configuration values. Nor can we set the timeout for not receiving an ACK for a message sent. (In TCP, every message sent waits for an ACK (acknowledgement) from the peer confirming that the message was successfully delivered.)
Java can only throw exceptions for cases such as a timeout for not completing the TCP handshake within a custom amount of time, a 'Connection reset by peer' exception when an RST is received from the peer, and an exception when the OS-level ACK timeout expires, however long that may be.
To dependably track connection status, you must implement your own ping/pong, keep-alive, or heartbeat system, as @Dog suggested in his answer. (The server must poll the client to see if it's still there, or the client has to continuously let the server know it's still there.)
For example, configure your client to send a small packet every 10 seconds.
In MINA, you can set a reader idle timeout, which fires an event when a session has received nothing for a period of time; you can terminate the connection when that event is delivered. Setting the reader timeout a bit longer than the client's send interval accounts for random high latency between the client and server: for example, a reader idle timeout of 15 seconds would be lenient in this case.
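A sketch of that MINA setup; the port is arbitrary, and session.closeNow() assumes a recent MINA 2.x (older versions use session.close(true)):

```java
import java.io.IOException;
import java.net.InetSocketAddress;

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IdleStatus;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class IdleTimeoutServer {
    public static void main(String[] args) throws IOException {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();
        // Fire sessionIdle() when nothing has been read for 15 seconds.
        acceptor.getSessionConfig().setIdleTime(IdleStatus.READER_IDLE, 15);
        acceptor.setHandler(new IoHandlerAdapter() {
            @Override
            public void sessionIdle(IoSession session, IdleStatus status) {
                // The client missed its 10-second heartbeat window; assume it's gone.
                session.closeNow();
            }
        });
        acceptor.bind(new InetSocketAddress(8080)); // assumption: port 8080
    }
}
```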
If your server will rarely experience session idling, and you think you can save bandwidth by polling the client only when the session has gone idle, look into using the Apache MINA KeepAliveFilter.
https://mina.apache.org/mina-project/apidocs/org/apache/mina/filter/keepalive/KeepAliveFilter.html

2 way server-client UDP in java

I want to write a program in Java that handles two-way communication between server and client using UDP. Most of the sources online cover only one direction, that is, from client to server. I want the server to be able to send messages to the client as well.
If you cannot use TCP, you can still achieve the same behaviour with UDP.
There are three aspects to consider.
First, the one you mentioned: you want to communicate both ways. You can do that by running a sender and a listener thread on both the client and the server.
Second: UDP packets are not guaranteed to arrive. You have to implement ACK logic in your application layer.
Third: UDP packets are not guaranteed to arrive in order. You have to implement some kind of ordering in your application layer.
UDP is a connectionless protocol on top of IP. This just means that there is no established connection; you simply receive packets of data on the other end. To answer back, you have to send a packet "back" to the client, as in the sketch below.
For this the client needs to be reachable, however. This might or might not work through firewalls: usually firewalls get "punched through" if the client initiates the conversation, but there is no guarantee.
Note also that UDP packets may arrive out of order, duplicated, or not at all. You have to be ready for all of these. Packets bigger than the MTU also have a higher chance of not arriving, because they get fragmented.
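A minimal two-way UDP sketch along those lines: the server replies to whatever source address and port each datagram arrived from. The port and payload are illustrative placeholders:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketAddress;

public class UdpEchoServer {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(9876)) { // assumption: port 9876
            byte[] buf = new byte[1024];
            while (true) {
                DatagramPacket request = new DatagramPacket(buf, buf.length);
                socket.receive(request);                           // blocks until a packet arrives
                SocketAddress client = request.getSocketAddress(); // sender's address and port
                byte[] reply = "ack".getBytes();
                // "Answer back": send a packet to the client's source address.
                socket.send(new DatagramPacket(reply, reply.length, client));
            }
        }
    }
}
```

The client simply calls receive() on the same DatagramSocket it sent from, so the reply comes back through the same firewall "hole" the request opened.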

Can websocket messages get lost or not?

I'm currently developing a Java WebSocket client application, and I have to make sure that every message from the server is received by the client. Is it possible that I lose some messages (once they are sent from the server) due to a connection interruption? WebSocket is based on TCP, so this shouldn't happen, right?
It can happen. TCP guarantees the order of packets, but it does not mean that all packets sent from a server reach a client when unrecoverable trouble happens in the underlying network. Imagine someone pulls out your LAN cable or switches off your WiFi access point at the worst timing while your application is communicating with your server. TCP does not overcome such trouble.
To ensure that every WebSocket message sent from your server reaches your client, you have to implement some kind of SYN/ACK in the application layer.
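A library-agnostic sketch of such application-level acknowledgements; the "id|payload" framing, the 5-second timeout, and the single retry are illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Tracks sent messages by ID and resends any that are not ACKed in time.
// 'transmit' is whatever actually writes a text frame to the WebSocket.
public class AckTracker {
    private final Map<Long, String> pending = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Consumer<String> transmit;

    public AckTracker(Consumer<String> transmit) { this.transmit = transmit; }

    public void send(long id, String payload) {
        pending.put(id, payload);
        transmit.accept(id + "|" + payload);              // assumption: "id|payload" framing
        scheduler.schedule(() -> {
            String p = pending.get(id);
            if (p != null) transmit.accept(id + "|" + p); // no ACK within 5s: resend once
        }, 5, TimeUnit.SECONDS);
    }

    // Call this when the peer sends back an ACK carrying the message ID.
    public void onAck(long id) { pending.remove(id); }
}
```

A real implementation would keep retrying with backoff and buffer unacknowledged messages across reconnects, but the core bookkeeping is the same.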
TCP is a guaranteed protocol: packets will be received in the correct order by the higher application levels at the far end (as opposed to UDP, which is a send-and-hope protocol).
Generally speaking, TCP should be used for connections where all the data must arrive correctly at the far end, and UDP where a missing packet can be dropped without significant issue (e.g. streaming services, NTP updates).
In my game, to counter missed WebSocket messages, I added an int/long ID to each message. When the client detects that something is wrong in the sequence of IDs it receives, it requests the missing data from the server so it can recover properly.
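In other words, something like this hypothetical gap detector on the client side (assuming a single reader and IDs that start at 1 and increase by 1):

```java
// Detects gaps in the sequence of message IDs coming from the server.
public class SequenceChecker {
    private long expected = 1; // assumption: first ID is 1

    /** Returns -1 if the ID is in sequence, or the first missing ID otherwise. */
    public synchronized long onMessage(long id) {
        if (id == expected) {
            expected++;
            return -1;
        }
        return expected; // gap detected: request a resend from this ID onward
    }
}
```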
TCP has flow and error control mechanisms, which is why it provides reliable, ordered, and error-checked delivery.
In other words, TCP is a protocol that constantly checks whether the data arrived, and it has several mechanisms to ensure that.
You can see the difference between TCP and UDP (which offers none of these guarantees) in the link below.
Difference between tcp and udp

Java Sockets - Messages between many clients

So the problem is I have fifteen clients which need to be able to communicate with each other. My question is: how should this be done? Clearly one way is to simply make the clients also act as servers, but that means 105 unique connections (15×14/2) to fully connect the fifteen clients. I'd rather not do this, as it seems messy.
Current solution:
Each new connection has the server spin off a separate thread for listening to it. Each client has a separate thread monitoring the channel for incoming information.
The server acts as a message router: when Process 1 needs to send a message to Process 2, it sends the server a message indicating the intended recipient, the sender, and the message body.
Upon receiving this, the server passes the message to Process 2, whose listening thread detects it and hands it to the process.
And so on for each message between the clients.
This seems clunky. Is there a better methodology/package to use for this?
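For reference, a compressed sketch of exactly that router pattern; the "recipient|message" line framing, the name-registration handshake, and the port are illustrative assumptions, not a standard protocol:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Routes "recipient|message" lines between named clients through one hub.
public class RouterServer {
    private static final Map<String, PrintWriter> clients = new ConcurrentHashMap<>();

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000)) {  // assumption: port 5000
            while (true) {
                Socket socket = server.accept();
                new Thread(() -> handle(socket)).start();     // one listener thread per client
            }
        }
    }

    private static void handle(Socket socket) {
        String name = null;
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            name = in.readLine();                             // first line: the client's name
            clients.put(name, out);
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split("\\|", 2);        // "recipient|message"
                if (parts.length < 2) continue;
                PrintWriter target = clients.get(parts[0]);
                if (target != null) target.println(name + "|" + parts[1]);
            }
        } catch (IOException e) {
            // fall through to cleanup
        } finally {
            if (name != null) clients.remove(name);           // drop the departed client
        }
    }
}
```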
A UDP multicast system would work for this, but it will get complicated if you do it yourself, since you have to handle synchronization and fault detection/correction, as well as nodes dropping in and out of the group.
There are various middleware solutions, including distributed caches, that already address this problem pretty well. Look at Infinispan. If that's too high-level and you just want a lower-level solution, try JGroups. I only list those because I know they are quick and usable, but there are many others out there.

How to identify a broken socket connection in Java immediately?

I have a typical Java client and server. The client sends a request to the server and waits for the response, reading up to, say, 100 bytes of data from the contained input stream into a byte array. It waits for the complete 100-byte response to be read within a specified timeout period of, say, 3 seconds. The problem here is to identify whether the server went down or crashed while, or before, writing the response. Basically, we need to identify if the socket was broken or the peer disconnected for some reason. Is there a way to identify this?
How to identify a broken socket connection in Java immediately?
You can't detect it immediately, in Java or any other language. TCP/IP doesn't know, so Java can't know. The only sure way to detect a broken TCP connection is by writing to it and catching IOExceptions, and they won't happen immediately.
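A sketch of that write-and-catch probe; note the exception may only surface on a later write, since earlier writes can sit in the OS send buffer. The 0x00 probe byte assumes the peer tolerates it:

```java
import java.io.IOException;
import java.net.Socket;

public final class ConnectionProbe {
    // The only reliable probe: write and see if it eventually fails.
    public static boolean isAlive(Socket socket) {
        try {
            socket.getOutputStream().write(0); // assumption: 0x00 is harmless to the peer
            socket.getOutputStream().flush();
            return true;
        } catch (IOException e) {
            return false;                      // broken connection detected
        }
    }
}
```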
The best way to identify that the connection is down is to time it out: i.e. you expect a response within a given amount of time and flag the connection if that response does not come as expected.
When you have a graceful disconnection (e.g. the other end calls close()), a read on the connection will let you know once the buffer has been drained.
However, if there is some other type of failure, you might not be notified until the OS times out the connection (e.g. after 3 minutes), and indeed you may want to keep the connection: e.g. if you pull the network cable out for 10 seconds and put it back in, that doesn't need to be treated as a failure.
EDIT: I don't believe it's a good idea to be too aggressive in automatically handling connection/service "failures". This is usually better handled by a planned fix to the system, based on investigation of the true cause, e.g. increased bandwidth, redundant connectivity, faster servers, code fixes.
If the connection is broken abnormally, you will receive an IOException when reading; that normally happens quite fast, but there are no guarantees about the time - it all depends on the OS, network hardware, etc. If the remote end gracefully closes the socket, you'll read -1 as the next byte.
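A sketch showing both outcomes in one read loop:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public final class ReadLoop {
    public static void run(Socket socket) {
        try (InputStream in = socket.getInputStream()) {
            int b;
            while ((b = in.read()) != -1) {
                // ... handle the byte ...
            }
            // read() returned -1: the peer closed the socket gracefully.
        } catch (IOException e) {
            // The connection broke abnormally (RST, OS-level timeout, ...).
        }
    }
}
```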
Assuming everything else works, if the remote peer - the TCP server - was killed then the TCP client will normally receive a TCP RST (reset) and you'll get an IOException in your client application.
However, there are lots of other things that can go wrong besides a process being killed. Basically anything on the network path between the two processes: a cable is yanked, a router dies, a firewall dies, etc. All of this will not immediately be detected.
For the above reasons the general rule is - as pointed out in the answer from EJP - that a broken connection can only be detected by writing to it. This is why it is always recommended that a TCP client and TCP server exchange some type of heartbeat messages at regular intervals. There are different ways to do this. I like best the method where the TCP client will - in the absence of data being received from the TCP server - send a heartbeat message to the server and expect a reply back within a certain time period. This way heartbeat messages will only be sent when really needed.
A sub-optimal approach - if you cannot implement true heartbeating - is to always read with a timeout. Set the timeout on the socket and then catch java.net.SocketTimeoutException; this lets you know that no data has been received on the socket for x milliseconds.
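A sketch of that sub-optimal-but-simple approach; the host, port, and 5-second timeout are placeholders:

```java
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public final class TimedRead {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 7000)) { // hypothetical endpoint
            socket.setSoTimeout(5_000);            // read() gives up after 5 seconds
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[100];
            try {
                int n = in.read(buf);
                // ... got data (or -1 on a graceful close) ...
            } catch (SocketTimeoutException e) {
                // Nothing received within 5s; decide whether the peer is dead.
            }
        }
    }
}
```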
It should be mentioned that there's one scenario where you don't have to use heartbeating or socket timeouts: if the TCP client and the TCP server communicate over a loopback interface, a broken connection will always be propagated to both the client application and the server application, because in that case there is no network infrastructure between the two processes. So if you have an existing application that isn't well designed with respect to its TCP communication (i.e. it implements neither heartbeating nor reading with a timeout), then as a last resort you may 'fix' the problem by moving the two applications onto the same host and letting them communicate over the loopback interface.
