Sending messages over an unreliable network in Java

I need to send a continuous flow of messages (simple TextMessages with a timestamp and x/y coordinates) over a wireless network from a moving computer. There will be a lot of these short messages (around 200 per second), and unfortunately the network connection is unreliable, since the sending device will leave the WLAN area from time to time. When the connection is not available, all outgoing messages should be buffered until the connection is back up again. The order of the transmitted messages does not matter, since they contain a timestamp, but ALL messages must be transferred.
What would be a simple but reliable method for sending these telegrams? Would it be possible to just use a plain TCP or UDP socket connection? Would messages be buffered while the connection is temporarily down and sent automatically afterwards? Or is the connection loss detected and reported directly, so that I could buffer the messages and try to reconnect periodically on my own? Do libraries like Netty help here?
I also thought about using broker-to-broker communication (e.g. an ActiveMQ network of brokers) as an alternative. Would the overhead be too big here? Would you suggest another messaging middleware in this case?

TCP guarantees delivery (while it's connected, that is). You should check whether the connection has gone down and put messages in a queue while retrying the connection. Once you see that the connection is back up, dump the queue into the TCP socket.
Also look into TCP keepalive to recognize a down connection: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
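A minimal sketch of that queue-and-reconnect approach (HOST, PORT and the newline-delimited text protocol are assumptions, not anything from the question): producers drop messages into an in-memory queue without ever blocking on the network, and a writer thread (re)connects and drains the queue, putting a message back if its write fails.

    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class BufferingSender implements Runnable {
        private static final String HOST = "receiver.example"; // placeholder
        private static final int PORT = 9000;                  // placeholder

        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        /** Called by producers (~200/sec); never blocks on the network. */
        public void send(String message) {
            queue.add(message);
        }

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(HOST, PORT), 3000);
                    Writer out = new OutputStreamWriter(
                            socket.getOutputStream(), StandardCharsets.UTF_8);
                    while (true) {
                        String msg = queue.take();   // next buffered message
                        try {
                            out.write(msg + "\n");   // newline-delimited text
                            out.flush();
                        } catch (IOException e) {
                            queue.add(msg);          // keep the unsent message
                            throw e;                 // reconnect in outer loop
                        }
                    }
                } catch (IOException e) {
                    // link down: messages simply pile up in the queue
                    try {
                        Thread.sleep(1000);          // retry delay
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                    }
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

One caveat: a successful write()/flush() only means the bytes reached the local send buffer, not the peer, so if ALL messages really must arrive you still want an application-level acknowledgement on top of this.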

It seems like you could use a message wrapper such as Java JMS with an "assured persistent" reliability mode. I have not done this myself in the context of text messages, but the idea may lead you to the right answer. There may also be an Apache library already written that handles what you need, such as Qpid.
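To make that concrete, here is a hedged sketch of the JMS route using ActiveMQ (broker URL and queue name are placeholders): the failover: transport reconnects and buffers sends on the client side, and PERSISTENT delivery tells the broker to store messages until they are consumed.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class JmsSender {
        public static void main(String[] args) throws Exception {
            // failover: transport reconnects automatically when the link drops
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("failover:(tcp://broker-host:61616)");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("position.updates"); // placeholder
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // broker stores messages

            producer.send(session.createTextMessage("t=...,x=...,y=..."));
            connection.close();
        }
    }

Whether the per-message overhead is acceptable at 200 messages per second is something you would have to measure for your setup.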

Related

Does Java's socket support options like SO_SNDTIMEO in C?

Currently I have a situation in which a client disconnects completely without sending an EOF (for example, the client is a phone that suddenly switches networks from WiFi to 4G), but my server will still send messages to this client. It can take at least 10 minutes until the server finds out that the peer is unreachable.
So, is there an option in Java to reduce the send timeout, just like SO_SNDTIMEO in C?
The Android docs are pretty straightforward about what they offer: https://developer.android.com/reference/java/net/SocketOptions.html
SO_TIMEOUT is among the list, but it applies to reading operations only. Send operation completion usually doesn't indicate that a packet has been received by the remote host, but rather indicates that the packet has been accepted by kernel's network queue and will be sent "soon".
I won't blame the Android team for not having (or at least not advertising) a socket option for a send timeout, because you don't get much information from the completion of a send. It's actually up to the application level to detect disconnects. Enhance your protocol, introduce application-level keepalives, try non-blocking socket mode to avoid long blocking operations, and keep track of what was actually received by the remote host - a send alone is not enough. This will result in a much more robust application.
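Java's java.net.Socket has no direct SO_SNDTIMEO equivalent, but one way to approximate a send timeout - purely a sketch, and still subject to the caveat above that a completed send proves nothing about delivery - is to run the blocking write on a helper thread and bound it with a Future:

    import java.io.OutputStream;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class SendWithTimeout {
        private final ExecutorService writer = Executors.newSingleThreadExecutor();

        /** Throws TimeoutException if the blocking write doesn't finish in time. */
        public void send(Socket socket, byte[] data, long timeoutMs) throws Exception {
            OutputStream out = socket.getOutputStream();
            Future<?> pending = writer.submit(() -> {
                out.write(data);   // may block if the send buffer is full
                out.flush();
                return null;
            });
            try {
                pending.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                socket.close();    // unblocks the stuck write; treat peer as dead
                throw e;
            }
        }
    }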

Apache MINA: how to detect that you're sending messages to the client over an invalid socket?

I have a server setup using MINA version 2.
I don't have much experience with sockets and tcp.
The problem is that if I make a connection to my server, then unplug my internet and close the connection (the server gets no notification of the connection being closed), the server will forever think that my connection is still active and valid.
The server will continue to send messages to my connection and doesn't throw any exceptions, even though there is nothing on my computer bound to the local port.
How can I test that the connection still exists?
I've tried running MINA logging in debug mode, and logging IoSession.isConnected(), IoSession.isActive() and IoSession.isClosing(). They always return true, true, false. Also, in debug mode there was no useful information stating that the connection was lost. It just logged the regular "sent message" stuff, as if there was nothing wrong.
From using Flash ActionScript, I have had experiences where Flash throws errors saying it's operating on an invalid socket. That leads me to believe the socket on the server is no longer valid for the connection. So if Flash can detect invalid sockets, a Java server should be able to detect them too, correct?
If there is truly no way to detect dead connections, I can always set up a keep-alive routine where the client constantly sends an "I'm here" message to the server, and the server closes sessions that haven't had an incoming message for a period of seconds.
EDIT: After learning that "sockets" are private and never shared over the network, I managed to get better search results for my issue, and I found this SO thread:
Java socket API: How to tell if a connection has been closed?
Unfortunately the IOException 'Connection reset by peer' doesn't occur when I write to the IoSession in MINA.
Edit:
Is there any way at all in Java to detect when an ACK to a TCP packet was not received after sending a packet? An ACK Timeout?
Edit:
Yet apparently my computer should send an RST to the server, according to this answer: https://stackoverflow.com/a/1434592/4425643
But that seems like a bad way of port scanning. Is this how port scanning works - port scanners send data to a port and the victim's service responds with an RST? Sorry, I think I need a new question for all this. But it's odd that MINA doesn't throw 'connection reset by peer' when it sends data, so apparently my computer doesn't send an RST.
The concept of socket or connection in Internet protocols is an illusion. It's a convenient abstraction that is provided to you by the operating system and the TCP stack, but in reality, it's all fake.
Under the hood, everything on the Internet takes the form of individual packets.
From the perspective of a computer sending packets to another computer, there is no built-in way to know whether that computer is actually receiving the packets, unless that computer (or some other computer in between, like a router) tells you that the packets were, or were not, received.
From the perspective of a computer expecting to receive packets from another computer, there is no way to know in advance whether any packets are coming, will ever come, or in what order -- until they actually arrive. And once they arrive, just the fact that you received one packet does not mean you'll receive any more in the future.
That's why I say connections or sockets are an illusion. The way that the operating system determines whether a connection is "alive" or not, is simply by waiting an arbitrary amount of time. After that amount of time -- called a timeout -- if one side of the TCP connection doesn't hear back from the other side, it will just assume that the other end has been disconnected, and arbitrarily set the connection status to "closed", "dead" or "terminated" ("timed out").
So:
Your server has no clue that you've pulled the plug on your Internet connection. It has no way of knowing that.
Your server's TCP stack has been configured a certain way to wait an arbitrary amount of time before "giving up" on the other end if no response is received. If this timeout is set to a very large period of time, it may appear to you that your server is hanging on to connections that are no longer valid. If this bothers you, you should look into ways to decrease the timeout interval.
Analogy: If you are on a phone call with someone, and there's a very real risk of them being hurt or killed, and you are talking to them and getting them to answer, and then the phone suddenly goes dead..... Well, how long do you wait? At what point do you assume the other person has been hurt or killed? If you wait a couple milliseconds, in most cases that's too short of a "timeout", because the other person could just be listening and thinking of how to respond. If you wait for 50 years, the person might be long dead by then. So you have to set a reasonable timeout value that makes sense.
What you want is a KeepAlive, heartbeat, or ping.
As per #allquicatic's answer, there's no completely reliable built-in method to do this in TCP. You'll have to implement a method to explicitly ask the client "Are you still there?" and await an answer for a specified amount of time.
https://en.wikipedia.org/wiki/Keepalive
A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent this link from being broken.
https://en.wikipedia.org/wiki/Heartbeat_(computing)
In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a system.[1] Usually a heartbeat is sent between machines at a regular interval in the order of seconds. If a heartbeat isn't received for a time—usually a few heartbeat intervals—the machine that should have sent the heartbeat is assumed to have failed.[2]
The easiest way to implement one is to periodically send an arbitrary piece of data - e.g. a null command. A properly programmed TCP stack will time out if an ACK is not received within its specified timeout period, and then you'll get an IOException 'Connection reset by peer'.
You may have to manually tune the TCP parameters, or implement your own functionality if you want more fine-grained control than the default timeout.
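A minimal sketch of such a heartbeat (the single 0x00 "null command" byte and the 10-second period are arbitrary choices): a scheduled task keeps writing; once the stack has given up on the peer, a write eventually fails with an IOException and the socket gets closed.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class Heartbeat {
        public static void start(Socket socket) {
            ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
            ses.scheduleAtFixedRate(() -> {
                try {
                    OutputStream out = socket.getOutputStream();
                    out.write(0x00);   // arbitrary "null command" byte
                    out.flush();
                } catch (IOException e) {
                    // the stack gave up on the peer: connection is dead
                    try { socket.close(); } catch (IOException ignored) { }
                    ses.shutdown();
                }
            }, 10, 10, TimeUnit.SECONDS);
        }
    }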
The TCP stack's internals are not exposed to Java, and Java does not provide a means to edit the TCP configuration that exists at the OS level.
This means we cannot use TCP keepalive efficiently in Java: you can enable it (Socket.setKeepAlive(true)), but you can't change its default timing values. Furthermore, we can't set the timeout for not receiving an ACK for a message sent. (In TCP, every segment sent waits for an ACK (acknowledgement) from the peer confirming successful delivery.)
Java can only throw exceptions for cases such as a timeout for not completing the TCP handshake within a custom amount of time, a 'Connection reset by peer' exception when an RST is received from the peer, and an exception for an ACK timeout after whatever period of time the OS settles on.
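The handshake timeout mentioned there is, for example, something Java does expose directly:

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class ConnectTimeoutDemo {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket()) {
                // fail if the TCP handshake doesn't complete within 3 seconds
                socket.connect(new InetSocketAddress("example.com", 80), 3000);
                System.out.println("connected");
            } catch (SocketTimeoutException e) {
                System.out.println("handshake timed out");
            }
        }
    }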
To dependably track connection status, you must implement your own ping/pong, keep-alive, or heartbeat system, as #Dog suggested in his answer. (The server must poll the client to see if it's still there, or the client has to continuously let the server know it's still there.)
For example, configure your client to send a small packet every 10 seconds.
In MINA, you can set a session reader idle timeout, which will send an event when a session reader has been idle for a period of time. You can terminate that connection on delivery of this event. Setting the reader timeout to be a bit longer than the small packet interval will account for random high latency between the client and server. For example, a reader idle timeout of 15 seconds would be lenient in this case.
If your server will rarely experience session idling, and you think you can save bandwidth by polling the client when the session has gone idle, look into using the Apache MINA Keep Alive Filter.
https://mina.apache.org/mina-project/apidocs/org/apache/mina/filter/keepalive/KeepAliveFilter.html
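For illustration, a sketch of the reader-idle approach in MINA 2, using the 15-second value from the example above (API names are from MINA 2; session.closeNow() assumes a reasonably recent 2.x release - older ones use session.close(true)):

    import java.net.InetSocketAddress;
    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IdleStatus;
    import org.apache.mina.core.session.IoSession;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class IdleKillServer {
        public static void main(String[] args) throws Exception {
            NioSocketAcceptor acceptor = new NioSocketAcceptor();
            // fire sessionIdle() when nothing has been read for 15 seconds
            acceptor.getSessionConfig().setIdleTime(IdleStatus.READER_IDLE, 15);
            acceptor.setHandler(new IoHandlerAdapter() {
                @Override
                public void sessionIdle(IoSession session, IdleStatus status) {
                    if (status == IdleStatus.READER_IDLE) {
                        session.closeNow(); // client missed its keep-alive window
                    }
                }
            });
            acceptor.bind(new InetSocketAddress(12345)); // placeholder port
        }
    }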

Can websocket messages get lost or not?

I'm currently developing a Java WebSocket Client Application and I have to make sure that every message from the server is received by the client. Is it possible that I lose some messages (once they are sent from the server) due to a connection interruption? WebSocket is based on TCP so this shouldn't happen right?
It can happen. TCP guarantees the order of packets, but it does not mean that all packets sent from a server reach a client when unrecoverable trouble happens in the underlying network. Imagine someone pulls out your LAN cable or switches off your WiFi access point at the worst timing while your application is communicating with your server. TCP does not overcome such trouble.
To ensure that every WebSocket message sent from your server reaches your client, you have to implement some kind of SYN/ACK in the application layer.
TCP is a guaranteed protocol - packets will be received in the correct order by the higher application levels at the far end (as opposed to UDP, which is a send-and-hope protocol).
Generally speaking, TCP should be used for connections where all the data must arrive correctly at the far end. UDP is used where a missing packet can be dropped without significant issue (e.g. streaming services, NTP updates).
In my game, to counter missed WebSocket messages, I added an int/long ID to each message. When the client detects that something is wrong in the sequence of IDs it receives, it requests the missing data from the server so it can recover properly.
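A sketch of that ID-gap scheme on the receiving side (requestResend() is a hypothetical hook; wire it to whatever recovery request your server understands, and nextExpected assumes the server numbers messages from 1):

    import java.util.TreeSet;

    public class SequenceTracker {
        private long nextExpected = 1;   // assumption: server numbers from 1
        private final TreeSet<Long> missing = new TreeSet<>();

        /** Call for every message ID received, in arrival order. */
        public synchronized void onMessage(long id) {
            if (id == nextExpected) {
                nextExpected++;                  // the normal case
            } else if (id > nextExpected) {
                for (long gap = nextExpected; gap < id; gap++) {
                    missing.add(gap);            // remember the hole...
                    requestResend(gap);          // ...and ask for it again
                }
                nextExpected = id + 1;
            } else {
                missing.remove(id);              // a resend filled a hole
            }
        }

        private void requestResend(long id) {
            // hypothetical hook: send e.g. "RESEND <id>" back to the server
        }
    }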
TCP has flow control and, more importantly here, acknowledgement and retransmission - it provides reliable, ordered, and error-checked delivery.
In other words, TCP is a protocol that constantly checks whether the data arrived.
This protocol has different mechanisms to ensure that.
You can see the difference between TCP and UDP (which has none of these delivery guarantees) in the link below.
Difference between tcp and udp

What is better for instant messenger TCP or UDP?

I need to implement client/server instant messenger using pure sockets in Java lang.
The server should serve large number of clients and I need to decide which sockets should I use - TCP or UDP.
Thanks, Costa.
TCP
Reason:
TCP: "There is absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent."
UDP: "There is no guarantee that the messages or packets sent would reach at all."
Learn more at: http://www.diffen.com/difference/TCP_vs_UDP
Would you want your chat message possibly lost?
Edit: I missed the part about the large number of clients. I still think it needs to be a TCP server because of the nature of a chat program - I can't imagine sending the actual text content that users type over UDP.
Note that a single listening port is not limited to 65536 concurrent connections (a TCP connection is identified by the client's address and port as well), but a busy server can still hit OS limits such as open file descriptors. If you really need to go past what one machine can handle, you could create a dispatcher server that sends incoming connections to the appropriate server depending on current server load.
You could use both. Use TCP for exchanging the actual messages, so no data is lost and streaming large messages (e.g. containing JPEGs) is possible. Use UDP only for sending short 'connectNow' messages to clients for which there are messages queued. The clients could have states like (NotLoggedIn, TCPconnected, TCPdisconnected, LoggedOut), with various timeouts to control the state transitions as well as the normal message-exchange events. The UDP 'connectNow' message would instruct clients in 'TCPdisconnected' to connect and so move to 'TCPconnected', where they would stay, exchanging messages, until some inactivity timer instructs the client to disconnect for now. This would, of course, be unreliable, so you may wish to repeat the 'connectNow' message every X seconds for N times until the client connects. The client should, in any case, attempt a poll every X minutes, just in case...
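A sketch of the UDP 'connectNow' side of that scheme (host name, port, repeat count and delay are all placeholders):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class WakeupSender {
        public static void main(String[] args) throws Exception {
            byte[] payload = "connectNow".getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                DatagramPacket packet = new DatagramPacket(
                        payload, payload.length,
                        InetAddress.getByName("client-host"), 9876); // placeholders
                // UDP is unreliable, so repeat N times, as described above
                for (int attempt = 0; attempt < 3; attempt++) {
                    socket.send(packet);
                    Thread.sleep(5000);  // X seconds between attempts
                }
            }
        }
    }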
It depends whether the user needs to know if the messages have been delivered to the server. UDP packets have no inherent acknowledgement. If the client sends an IM message to the server and it gets lost in transit, neither the client nor the server will know about it.
(The short answer is "use TCP" ... but it is worth thinking through the design implications for yourself.)
TCP would give you reliability, which is certainly desirable for instant messaging - you would not want messages to be dropped during a conversation.
However, if you intend to offer group messaging, then you might end up using multicast. For such cases, UDP would be the right choice, since UDP can handle point-to-multipoint delivery. Using TCP for multicast applications would be hard, since the sender would have to keep track of retransmissions and the sending rate for multiple receivers. One alternative could be to use TCP for point-to-point chat and UDP for group messaging.
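For reference, a sketch of what the UDP group-messaging side could look like with java.net.MulticastSocket (group address and port are placeholders; multicast also has to be permitted by the network in between):

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.nio.charset.StandardCharsets;

    public class GroupReceiver {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.1"); // placeholder
            try (MulticastSocket socket = new MulticastSocket(4446)) {
                socket.joinGroup(group); // deprecated in recent JDKs, kept simple here
                byte[] buf = new byte[512];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);  // blocks until one group message arrives
                System.out.println(new String(packet.getData(), 0,
                        packet.getLength(), StandardCharsets.UTF_8));
                socket.leaveGroup(group);
            }
        }
    }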

How to identify a broken socket connection in Java immediately?

I have a typical java client and a server. The client sends some request to the server and waits for the response. The client reads up to say 100 bytes of data from the contained input stream into an array of bytes. It waits for the complete response of 100 bytes to be read within a specified timeout period of say 3 secs. The problem here is to identify if the server went down or crashed while/before writing the response. Basically, we need to identify if the socket was broken or the peer disconnected for some reason. Is there a way to identify this?
How to identify a broken socket connection in Java immediately?
You can't detect it immediately, in Java or any other language. TCP/IP doesn't know, so Java can't know. The only sure way to detect a broken TCP connection is by writing to it and catching IOExceptions, and they won't happen immediately.
The best way to identify that the connection is down is to time out the connection, i.e. you expect a response within a given amount of time and flag the connection as down if that response does not come when you expect it.
When you have a graceful disconnection (e.g. the other end calls close()), a read on the connection will let you know once the buffer has been drained.
However, if there is some other type of failure, you might not be notified until the OS times out the connection (e.g. after 3 minutes), and indeed you may want to keep the connection. E.g. if you pull the network cable out for 10 seconds and put it back in, that doesn't need to be treated as a failure.
EDIT: I don't believe it's a good idea to be too aggressive in automatically handling connection/service "failures". This is usually better handled by a planned fix to the system, based on investigation of the true cause, e.g. increased bandwidth, redundant connectivity, faster servers, code fixes.
If the connection is broken abnormally, you will receive an IOException when reading; that normally happens quite fast, but there are no guarantees about the time - it all depends on the OS, network hardware, etc. If the remote end gracefully closes the socket, you'll read -1 as the next byte.
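In code, the two cases described in this answer look roughly like this:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    public class ReadLoop {
        static void readAll(Socket socket) {
            try {
                InputStream in = socket.getInputStream();
                int b;
                while ((b = in.read()) != -1) {
                    // process byte b ...
                }
                // read() returned -1: the peer closed the socket gracefully
            } catch (IOException e) {
                // abnormal break, e.g. 'Connection reset by peer'
            }
        }
    }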
Assuming everything else works, if the remote peer - the TCP server - was killed then the TCP client will normally receive a TCP RST (reset) and you'll get an IOException in your client application.
However, there are lots of other things that can go wrong besides a process being killed. Basically anything on the network path between the two processes: a cable is yanked, a router dies, a firewall dies, etc. All of this will not immediately be detected.
For the above reasons the general rule is - as pointed out in the answer from EJP - that a broken connection can only be detected by writing to it. This is why it is always recommended that a TCP client and TCP server exchange some type of heartbeat messages at regular intervals. There are different ways to do this. I like best the method where the TCP client will - in the absence of data being received from the TCP server - send a heartbeat message to the server and expect a reply back within a certain time period. This way heartbeat messages will only be sent when really needed.
A sub-optimal approach - if you cannot implement true heartbeating - is to always read with a timeout. Set the timeout on the socket and then catch java.net.SocketTimeoutException. This will let you know that no data has been received on the socket for x milliseconds.
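A sketch of that read-with-timeout approach (the 5-second value is an arbitrary choice):

    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class TimedRead {
        static void readOrComplain(Socket socket) throws Exception {
            socket.setSoTimeout(5000);   // reads now block for at most 5 s
            InputStream in = socket.getInputStream();
            try {
                int b = in.read();       // a byte, or -1 on graceful close
                // ... handle data ...
            } catch (SocketTimeoutException e) {
                // no data for 5 s: send a heartbeat now, or assume the worst
            }
        }
    }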
It should be mentioned that there's one scenario where you don't have to use heartbeating or a socket timeout: if the TCP client and the TCP server communicate over a loopback interface, then a broken connection will always be propagated to both the TCP client application and the TCP server application, because in this case there's really no network infrastructure between the two processes. So if you have an existing application which isn't well-designed with respect to its TCP communication (i.e. it implements neither some form of heartbeating nor reading with a timeout), then as a last resort you may 'fix' the problem by moving the two applications onto the same host and letting them communicate over the loopback interface.
