best way to keep clients updated in LAN - Java networking

I'm working on a multi-client application that will connect with a server in the LAN.
Every client can send a command that changes the status of the server.
This 'ServerStatus', as I will call it, is an object with some values.
Now if the ServerStatus changes, all clients should know about it immediately.
My idea was to work like this:
The server sends a multicast to all listening clients with the versionNumber of the ServerStatus every second. So if a new client joins the multicast group, it can check whether its versionNumber is still current.
If not, the client will ask the current version of ServerStatus via UDP.
When a client sends a command that changes the ServerStatus,
the server will send his current (and new) ServerStatus to the same multicast group,
while in another thread, the versionNumber of the ServerStatus is still shared every second.
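To make the idea more concrete, here is a rough sketch of the heartbeat thread I have in mind (the multicast group 230.0.0.1:4446 and the plain-long payload are just placeholder choices):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Sketch of the per-second version heartbeat. The group address, the port and
// the payload layout (a single long) are illustrative assumptions.
public class VersionHeartbeat implements Runnable {

    private static final String GROUP = "230.0.0.1"; // assumed multicast group
    private static final int PORT = 4446;            // assumed port

    private volatile long versionNumber;

    public void setVersion(long v) { versionNumber = v; }

    @Override
    public void run() {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress group = InetAddress.getByName(GROUP);
            while (!Thread.currentThread().isInterrupted()) {
                byte[] payload = ByteBuffer.allocate(Long.BYTES)
                                           .putLong(versionNumber)
                                           .array();
                socket.send(new DatagramPacket(payload, payload.length, group, PORT));
                Thread.sleep(1000); // broadcast the current version once per second
            }
        } catch (Exception e) {
            Thread.currentThread().interrupt(); // stop the heartbeat on error/interrupt
        }
    }
}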
Do you guys think this is a good way to deal with this?
Or will this cause too many problems?

What happens if the new ServerStatus fails to reach the clients? In my opinion you should not use UDP when sending the new status to the clients, but a reliable protocol. So if you intend to use multicast on this you will have to get a reliable multicast protocol.
On the other hand, you may prefer client synchronization with the server:
Every time a client enters the network, it asks the server for its statusId (if it is not the same, the server sends it the ServerStatus), and the client also registers for status change events. (TCP)
When leaving, the client could send an UNREGISTER message. (UDP)
Each time the ServerStatus changes, the server sends the new ServerStatus to each registered client. On receiving the new ServerStatus, the client would send an ack-like message back to the server. (TCP)
If the ack is not received by the server, the client in question is unregistered (because it would mean the client left the network without unregistering - by error).
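A minimal sketch of that register/push/ack loop over TCP (the wire format - raw status bytes out, a single ack byte back - and the 2-second ack timeout are just assumptions):

import java.io.IOException;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the "registered clients + ack" idea. Clients that do not ack in time
// are dropped from the registry, as described above.
public class StatusPublisher {

    private final List<Socket> registered = new CopyOnWriteArrayList<>();

    public void register(Socket client) { registered.add(client); }

    public void publish(byte[] serverStatus) {
        for (Socket client : registered) {
            try {
                client.setSoTimeout(2000);                // wait at most 2 s for the ack
                client.getOutputStream().write(serverStatus);
                client.getOutputStream().flush();
                int ack = client.getInputStream().read(); // client answers with one byte
                if (ack == -1) throw new IOException("connection closed");
            } catch (IOException e) {
                registered.remove(client);                // no ack -> assume the client left
            }
        }
    }
}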
hope this helps..

Basically, your idea sounds good to me.
I would suggest you dig more into the principles of "group communication" and look at frameworks such as jGroups; I know that JBoss Cache uses it to distribute data among its nodes.
Maybe for reliability clients should also query the server once every X seconds, to check that they have the correct version number,
or at least perform this when they are started / recover from crash.
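As a rough illustration of what jGroups gives you (this uses the jGroups 3.x/4.x-style API with the default protocol stack; the cluster name and the status payload are made up):

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

// Sketch: every node joins the same cluster; whenever the server sends the new
// ServerStatus, all members receive it through the receive() callback.
public class StatusCluster {

    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel(); // default protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                Object newStatus = msg.getObject();
                System.out.println("ServerStatus updated: " + newStatus);
            }
        });
        channel.connect("server-status-cluster"); // assumed cluster name

        // the server would call something like this whenever its status changes:
        channel.send(new Message(null, "ServerStatus v42")); // null destination = all members
    }
}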

Related

How could a server check the availability of a client?

I have a classic HTTP client/server application where the server serves the clients data at their will but also performs some kind of call-backs to the list of clients' addresses it has. My two questions are:
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open) ?
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
Option #1: direct communication
Client tells server "I'm alive" at a periodic interval. You could make your client ping your server at a configurable interval, and if the server does not receive the signal for a certain time, it marks the client as down. The client could even tell the server more info (e.g. its status) in each heartbeat if necessary; this is also the approach used in many distributed systems (e.g. Hadoop/HBase).
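A minimal client-side sketch of that heartbeat (host, port, client id and the 5-second interval are assumptions; the server side would simply track the last time it heard from each client):

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of Option #1: the client sends an "I'm alive" datagram at a fixed interval.
public class HeartbeatClient {

    public static void main(String[] args) throws IOException {
        InetAddress server = InetAddress.getByName("server.local"); // assumed server host
        int port = 9000;                                            // assumed heartbeat port
        DatagramSocket socket = new DatagramSocket();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                byte[] data = "ALIVE client-42".getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(data, data.length, server, port));
            } catch (IOException e) {
                e.printStackTrace(); // if heartbeats stop arriving, the server marks us down
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}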
Option #2: distributed coordination service
You could treat all clients connected to a server as a group and use a 3rd-party distributed coordination service like ZooKeeper to facilitate the membership management. A client registers itself with ZooKeeper as a new member of the group right after booting up, and is removed from the group when it goes down. ZooKeeper notifies the server whenever the membership changes.
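A minimal sketch of this approach with the plain ZooKeeper client (connect string, session timeout and paths are assumptions, and the /clients parent node is assumed to already exist):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Sketch of Option #2: the client registers an ephemeral znode under /clients.
// When the client dies, its session expires and the node disappears, so a server
// watching /clients learns about the failure.
public class ZkMembership {

    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 5000, event -> {}); // assumed ensemble
        String path = zk.create("/clients/member-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println("Registered as " + path);
    }
}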
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open) ?
I think this can only be done in the way Option #1 above describes. Either the clients tell the server "My callback port is OK" at a fixed interval, or the server asks the clients "Is your callback port OK?" and waits for their responses at a fixed interval.
You would have to establish some sort of protocol; simply put: the server keeps track of "messages" that it tried to send to clients.
If that "send" is acknowledged, fine; if not, the server might do a limited number of retries, then regard that client as "gone" and drop any further messages for it.
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
A write to the client will fail.
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open)?
A write to the client will fail.
The write won't necessarily fail immediately, due to TCP buffering, but the write will eventually provoke retries and retry timeouts that will cause a subsequent read or write to fail.
In Java the failure will manifest itself as an IOException: connection reset.
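For illustration, the detection on the server side boils down to catching that exception around the write (the Socket here is assumed to be an already-accepted connection to the client's callback port):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

// Sketch: the first write after the client vanishes may still "succeed" because of
// TCP buffering; a later write eventually fails with e.g. "Connection reset".
public class ClientNotifier {

    public boolean notifyClient(Socket client, byte[] message) {
        try {
            OutputStream out = client.getOutputStream();
            out.write(message);
            out.flush();
            return true;
        } catch (IOException e) {
            // e.g. java.net.SocketException: Connection reset
            // treat the client as down and unregister it
            return false;
        }
    }
}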

Android : How can the server call the client back after a disconnection?

I have 3 devices android (1 server and 2 clients) connected via TCP.
Knowing that sometimes the server uses the open socket to contact the client, how can I manage the disconnections?
Indeed, in case of a short disconnection, only the client can restore the connection. The server can't send a message to the client anymore and has to wait for the client's call. Right?
So, I think I have 2 options:
Force the client to maintain contact with the server, sending it an "are you there" message every second. That way, if a disconnection occurs, the connection will be restored as soon as possible.
Use a peer-to-peer structure. Both (client and server) can "call" the other whenever they want.
The first solution seems heavier on the network, while the second is more complicated to set up and maintain. What do you recommend?

Can websocket messages get lost or not?

I'm currently developing a Java WebSocket Client Application and I have to make sure that every message from the server is received by the client. Is it possible that I lose some messages (once they are sent from the server) due to a connection interruption? WebSocket is based on TCP so this shouldn't happen right?
It can happen. TCP guarantees the order of packets, but it does not mean that all packets sent from a server reach the client when unrecoverable trouble happens in the underlying network. Imagine someone pulls out your LAN cable or switches off your WiFi access point at the worst possible moment while your application is communicating with your server. TCP cannot overcome such trouble.
To ensure that every WebSocket message sent from your server reaches your client, you have to implement some kind of SYN/ACK in the application layer.
TCP is a guaranteed protocol - packets will be received in the correct order by the higher application levels at the far end (as opposed to UDP, which is a send-and-hope protocol).
Generally speaking, TCP should be used for connections where all the data must arrive correctly at the far end. UDP is used where a missing packet can be dropped without significant issue (e.g. streaming services, NTP updates).
In my game, to counter missed web socket messages, I added an int/long ID for each message. When the client detects that something is wrong in the sequence of IDs it receives, the client will request for new data from the server to be able to recover properly.
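Roughly, the client-side check looks like this (the resync call is just a placeholder for however you ask the server to resend the missed data):

// Sketch of gap detection on sequential message IDs.
public class SequenceChecker {

    private long lastSeenId = 0;

    /** Returns true if the message arrived in order; false means something was missed. */
    public synchronized boolean onMessage(long messageId) {
        if (messageId == lastSeenId + 1) {
            lastSeenId = messageId;
            return true;
        }
        // gap detected, e.g. ids 5, 6 arrived and then 9
        requestResyncFrom(lastSeenId + 1); // hypothetical recovery request to the server
        lastSeenId = messageId;
        return false;
    }

    private void requestResyncFrom(long missingId) {
        // hypothetical: send something like {"resyncFrom": missingId} over the websocket
    }
}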
TCP has built-in flow control, acknowledgement and retransmission, which means it provides reliable, ordered, and error-checked delivery.
In other words, TCP is a protocol that constantly checks whether the data arrived.
This protocol has different mechanisms to ensure that.
You can see the difference between TCP and UDP (which has none of these mechanisms) in the link below.
Difference between tcp and udp

Server UDP and port binding

I am writing this game in Java and have problems with networking architecture.
I decided I will use UDP packets. I am just at the beginning, but the problem I am facing is that it seems the server has to respond to the client (which is behind a router that uses NAT) from exactly the same IP/port that the client connected to.
For example, I have client A behind a router. Client A has IP (local) 192.168.8.100 and it connects to server B from port 1234. The server is on 11.11.11.11:2345.
When client A connects to server B it uses 192.168.8.100:1234, but the router converts that to (for example) 22.22.22.22:6789.
Now, when the server wants to send packets to that client, it has to send them from 11.11.11.11:2345.
I would like to send data from another port like 11.11.11.11:2222, but this does not seem to work, at least not with my router.
I want to use a different port because I want to have two threads, one for listening and one for sending data, and each thread would have its own DatagramSocket. But, as I said, once client A connects to the server on port 2345, I cannot send data from port 2222.
Does anyone know how is this handled? I am doing it in Java, but it's not really a language specific problem.
UPDATE
After #Perception commented I have some more questions regarding his comments:
OK, so if I understand this correctly, if I have a server which is hosting 1000 games, each with 2 players, all sending/receiving will have to be done through the same DatagramSocket.
As I understand it, DatagramSocket is thread-safe, so I guess I can have one thread doing:
datagramSocket.receive(packet);
while at the same time second thread is doing
datagramSocket.send(anotherPacket);
Correct?
Also, can two threads send data at the same time through the same DatagramSocket? Is sending serialized in any way, meaning that the second send() starts only after the previous send() has finished, or is the data sent at the same time?
gorann, I'm not sure if I'm understanding you correctly, but it sounds like you're trying to control the port on which the server communicates with the client. There's no way to control this, and for good reasons.
This is one of the trickier differences between TCP and UDP.
When a new TCP session is initiated, the server side call to accept() gives you a new socket and the OS handles multiplexing the various sessions for you. With UDP, you need to handle the multiplexing yourself. But you need to do so in a way that works with NATs and other firewalls.
The way NAT works is that when it sees an outgoing packet, it creates a temporary rule allowing packets to return along the same port pair. Data returning from a port that the client has not yet sent to will likely be blocked.
This gives you two choices:
You could do all of your communication through a single port. This is not a bad option, it just means that you need a way to identify client sessions and route them to the appropriate thread.
You could create a separate port and instruct the client to send to that one instead. Have the server listen on a fixed port. The client sends a message to there, the server then sets up a new session port and sends that number back to the client using the server's listen port. The client then sends a message to the session port, which causes the NAT to open up that port and allow return traffic. Now the client and server thread have their own private port pair.
Option 1 is a bit more work because it requires data to be exchanged between threads, but it scales up better. Option 2 is easier and more CPU-efficient because each session thread can be independent, but there is only a finite number of ports available.
Either way, I recommend that you have the client include a semi-unique session id in each packet so that the server has more than just the client address and port number to verify who belongs to each session.
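A rough sketch of option 1 - one DatagramSocket for everything, demultiplexed on a session id carried in each packet (the packet layout, the port and the GameSession type are assumptions):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the first 4 bytes of every packet identify the session; the rest of the
// payload is handed off to the matching game.
public class SingleSocketServer {

    interface GameSession {
        void handle(SocketAddress from, ByteBuffer payload);
    }

    private final Map<Integer, GameSession> sessions = new ConcurrentHashMap<>();

    public void listen() throws Exception {
        try (DatagramSocket socket = new DatagramSocket(2345)) { // the one well-known port
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                if (packet.getLength() < 4) continue;            // too short to carry a session id
                ByteBuffer data = ByteBuffer.wrap(packet.getData(), 0, packet.getLength());
                int sessionId = data.getInt();                   // demultiplex on the session id
                GameSession session = sessions.get(sessionId);
                if (session != null) {
                    session.handle(packet.getSocketAddress(), data); // remaining bytes are payload
                }
            }
        }
    }
}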

client failure detection in client-server systems (distributed)

Assume a distributed communication system where client and server communicate via a stateless channel.
The client sends requests to the server and the server does processing and keeps internal records for each client.
Server sends back notifications to the clients as various events happen to the system, as needed.
The notification mechanism depends on the internal records.
My question is, what is the standard approach in distributed computing to handle client failures?
I.e. in this context, assume that the client process crashes or simply restarts.
The server still has the records for the client, but now the client and server are out of sync.
As a result the client will get notifications according to records created before the restart. This is undesirable.
What is a standardized way to detect client failures, e.g. that a client has restarted and its previous records must be erased?
I thought of periodic callbacks to clients - if a client is not reachable, erase its records - but I am not sure if this is a good idea. [EDIT] I thought of callbacks because the periodic events sent back to the client can be at very large intervals, so a client failure would not be noticed soon.
Can anyone help on this? The context of my application domain is web services.
Thank you!
The standard approach varies from system to system depending on the architecture and domain. How does the server find out that the client is down? I don't think you need callbacks, since you already send the notifications and can detect that the client is unreachable. For example:
1. send a notification to the client;
2. if it succeeds, go to 1;
3. otherwise, erase all the notifications in the queue for that client and set a flag to stop collecting events for it.
When a client is connected:
unset the flag;
start sending notifications
Or even a simpler approach:
erase the notification queue for the client when it connects, before initializing the conversation;
run a low-priority thread to erase all notifications older than X for all clients, to clean up notifications for clients which will never come back (a rough sketch of this bookkeeping follows).
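A rough sketch of the bookkeeping behind both variants (the Transport interface, i.e. how a notification actually reaches the client, is a hypothetical placeholder):

import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a per-client notification queue: on a failed send, drop the queue and
// stop collecting events; on reconnect, start clean again.
public class ClientNotifications {

    interface Transport { boolean trySend(String notification); } // assumed delivery hook

    private final Queue<String> pending = new ArrayDeque<>();
    private boolean collecting = true;

    public synchronized void onEvent(String notification) {
        if (collecting) pending.add(notification);
    }

    public synchronized void flush(Transport transport) {
        while (!pending.isEmpty()) {
            if (!transport.trySend(pending.peek())) {
                pending.clear();      // client unreachable: erase its queue
                collecting = false;   // and stop collecting events for it
                return;
            }
            pending.poll();
        }
    }

    public synchronized void onClientReconnect() {
        pending.clear();              // the simpler approach: start clean on reconnect
        collecting = true;
    }
}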
Update after the original author's comments
It strongly depends on how things are organized in your system. Assuming:
The server starts a thread (let's call it "agent") to serve a client, a thread per client.
The agent exits when the client shuts down the session properly or goes down.
there is a private record set for each client (not shared among agents/clients)
there is a shared list of current clients which is used by another component (not an ordinary agent, let's call it "dispatcher") to distribute records for clients.
solution:
1. the server starts an agent and registers the newly connected client in the list of clients. The dispatcher gets notified that a new client has arrived.
2. the agent consumes the records while the client is connected. On the client's shutdown and/or failure, the agent unregisters the client and cleans up the record set.
If things in your system aren't organized in the way described above, please provide some details.
