I have a classic HTTP client/server application where the server serves data to clients on request, but also performs call-backs to the list of client addresses it keeps. My two questions are:
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open) ?
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
Option #1: direct communication
The client tells the server "I'm alive" at a periodic interval. You could make your client ping your server at a configurable interval, and if the server does not receive the signal for a certain time, it marks the client as down. The client could even include more information (e.g. its status) in each heartbeat if necessary; this is the approach used in many distributed systems (e.g. Hadoop/HBase).
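As a rough illustration, a client-side heartbeat sender could look like this (the host, port, interval, and payload format are all made up for the example):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Client-side heartbeat: opens a connection and reports "I'm alive" every few seconds.
public class HeartbeatClient {
    public static void main(String[] args) throws Exception {
        String serverHost = "server.example.com";   // illustrative host
        int heartbeatPort = 9000;                   // illustrative port
        long intervalMillis = 5_000;                // configurable interval

        try (Socket socket = new Socket(serverHost, heartbeatPort)) {
            OutputStream out = socket.getOutputStream();
            while (true) {
                // The payload could also carry client status, as mentioned above.
                out.write("ALIVE client-42\n".getBytes(StandardCharsets.UTF_8));
                out.flush();
                Thread.sleep(intervalMillis);
            }
        } catch (IOException e) {
            // Connection refused or broken: the server stops seeing heartbeats
            // and marks this client as down once its timeout elapses.
            System.err.println("Heartbeat failed: " + e.getMessage());
        }
    }
}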
Option #2: distributed coordination service
You could treat all clients connected to a server as a group and use a third-party distributed coordination service like Zookeeper to handle the membership management. A client registers itself with Zookeeper as a new member of the group right after booting up, and leaves the group when it goes down. Zookeeper notifies the server whenever the membership changes.
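For illustration, a client might register itself as an ephemeral group member roughly like this (the connect string, paths, and payload are made up, and error handling is omitted):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Registers this client as an ephemeral, sequential member of a group node.
// The node disappears automatically when the client's session dies.
public class GroupMember {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 5000, event -> { /* session events */ });

        // The parent node "/clients" is assumed to exist already.
        String path = zk.create("/clients/member-",
                "callback-host:7000".getBytes(),     // illustrative payload
                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println("Registered as " + path);
    }
}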
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open) ?
I think this can only be done the way Option #1 above describes. Either the clients tell the server "My callback port is OK" at a fixed interval, or the server asks the clients "Is your callback port OK?" and waits for their response at a fixed interval.
You would have to establish some sort of protocol; simply put: the server keeps track of "messages" that it tried to send to clients.
If a send is acknowledged, fine; if not, the server might do a limited number of retries, then regard that client as "gone" and drop any further messages for that client.
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
A write to the client will fail.
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open) ?
A write to the client will fail.
The write won't necessarily fail immediately, due to TCP buffering, but the write will eventually provoke retries and retry timeouts that will cause a subsequent read or write to fail.
In Java the failure will manifest itself as an IOException: connection reset.
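A small sketch of how that can surface in Java; the first write may succeed because the data only sits in the send buffer, and a later write or read throws once TCP has given up (names are illustrative):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

// Demonstrates that writes into a dead connection fail only after TCP gives up.
public class DeadPeerWrite {
    static void pushTo(Socket client, byte[] payload) {
        try {
            OutputStream out = client.getOutputStream();
            out.write(payload);   // may succeed: the data just sits in the send buffer
            out.flush();
            // ...once the retransmission timeouts have expired, a subsequent
            // write or read fails:
            out.write(payload);
            out.flush();
        } catch (IOException e) {
            // typically something like "connection reset"
            System.err.println("Client appears to be down: " + e);
        }
    }
}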
Related
I have 3 devices android (1 server and 2 clients) connected via TCP.
Knowing that sometimes the server uses the open socket to contact the client, how can I manage the disconnections?
Indeed, in case of a short disconnection, only the client can restore the connection. The server can't send a message to the client anymore and has to wait for the client's call. Right?
So, I think I have 2 options:
Force the client to maintain contact with the server by sending it an "are you there" message every second. That way, if a disconnection occurs, the client will restore the connection as soon as possible.
Use a peer-to-peer structure, where both client and server can "call" the other whenever they want.
The first solution seems heavier on the network, while the second is more complicated to set up and maintain. What do you recommend?
How will the server know about a client connection loss? Does this trigger an event? Is it possible to store code (server side) so that it executes before the connection loss happens?
This connection loss can happen if:
the connection being idle for too long.
client side terminated.
etc.
I am asking this in particular for JSP and PHP.
It depends on the protocol you're talking about, but a "connection" is typically established through a three-way handshake, which causes both parties to simply agree that they're "connected" now. This means both parties remember in a table that there's an open connection to IP a.b.c.d on port x and what context this "connection" is associated with. All incoming data from that "connection" is then passed to the associated context.
That's all there is to it, there's no real "physical" connection; it's just an agreed upon state between two parties.
Depending on the protocol, a connection can be formally terminated with an appropriate packet. One party sends this packet to the other, telling it that the "connection" is terminated; both parties remove the table entries and that's that.
If the connection is interrupted without this packet being sent, neither party will know about it. Only the next time one party tries to send data to the other will this problem become apparent.
Depending on the protocol a connection may automatically be considered stale and terminated if no data was received for a certain amount of time. In this case, a dead connection will be noticed sooner, but requires a constant back and forth of some sort between both parties.
So in short: yes, there is a server event that can be triggered, but it is not guaranteed to be triggered.
When you close a socket, the socket on the other end is notified. However, if the connection is lost ungracefully (e.g. a network cable is unplugged, or a computer loses power), then you probably will not find out.
To deal with this, you can send periodic messages just to check the connection. If the send fails, then the connection has been interrupted. Make sure you set up your sockets to only wait for a reasonable amount of time, though.
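A minimal sketch of such a periodic check with a bounded wait, assuming the peer answers a one-line "PING" with "PONG" (names and timeouts are illustrative):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Periodically pings the peer; a failed send or a missing reply means the link is gone.
public class ConnectionChecker {
    static boolean stillAlive(Socket peer) {
        try {
            peer.setSoTimeout(3_000);   // only wait a reasonable amount of time
            PrintWriter out = new PrintWriter(peer.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(peer.getInputStream()));
            out.println("PING");
            return "PONG".equals(in.readLine());
        } catch (SocketTimeoutException e) {
            return false;   // no answer in time
        } catch (IOException e) {
            return false;   // the send failed: the connection has been interrupted
        }
    }
}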
If you are talking about a typical client/server architecture, the server shouldn't worry about the connection to the client; only the client should worry about the connection to the server. The client should take measures to avoid the connection being dropped, such as periodically sending a keep-alive message to avoid a timeout.
Why would the server need to bother about connection loss/termination?
The server's job is to serve the requests that come from the client. That's it. If the client doesn't receive the data it expected from the server, it can take appropriate action. If the connection drops while the server is processing data for the client, the server still can't do much, since the HTTP request is initiated by the client.
So the client can make a new request if for some reason it didn't get a response.
I'm working on a multi-client application that will connect with a server in the LAN.
Every client can send a command that changes the status of the server.
This 'ServerStatus', as I will call it, is an object with some values.
Now if the ServerStatus changes, all clients should know about it immediately.
My idea was to work like this:
The server sends a multicast with the versionNumber of the ServerStatus to all listening clients every second. So when a new client joins the multicast group, it can see whether its versionNumber is the same.
If not, the client will ask for the current ServerStatus via UDP.
When a client sends a command that changes the ServerStatus,
the server will send its current (and new) ServerStatus to the same multicast group,
while in another thread, the version number of the ServerStatus is still shared every second.
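Roughly, the announcing side could look something like this (the group address, port, and payload format are just placeholders):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Server side: multicasts the current ServerStatus version number once per second.
public class VersionAnnouncer {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("230.0.0.1");  // placeholder group address
        int port = 4446;                                         // placeholder port
        long version = 1;

        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                byte[] payload = ("VERSION " + version).getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(payload, payload.length, group, port));
                Thread.sleep(1_000);
            }
        }
    }
}

Clients would join the group with a MulticastSocket bound to the same port and compare the announced number against their local copy.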
Do you guys think this is a good way to deal with this?
Or will this cause too many problems, etc.?
What happens if the new ServerStatus fails to reach the clients? In my opinion you should not use UDP when sending the new status to the clients, but a reliable protocol. So if you intend to use multicast for this, you will have to use a reliable multicast protocol.
On the other hand, you may prefer client synchronization with the server:
Every time a client enters the network, it asks the server for its statusid (if it is not the same, the server sends it the ServerStatus), and the client also registers for status change events. (TCP)
When leaving, the client could send an UNREGISTER message (UDP).
Each time the ServerStatus changes, the server sends the new ServerStatus to each registered client. On receiving the new ServerStatus, the client sends an ack-like message back to the server. (TCP)
If the ack was not received by the server, the client in question would be unregistered (because it would mean the client had left the network without unregistering, due to an error).
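A condensed sketch of that register/push/ack flow on the server side, assuming each registered client is kept as an open TCP socket (the framing and class names are illustrative):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Pushes each new ServerStatus to every registered client and drops clients that don't ack.
public class StatusPublisher {
    private final List<Socket> registered = new CopyOnWriteArrayList<>();

    void register(Socket client) {
        registered.add(client);
    }

    void publish(String serverStatus) {
        for (Socket client : registered) {
            try {
                client.setSoTimeout(2_000);
                PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                out.println("STATUS " + serverStatus);
                if (!"ACK".equals(in.readLine())) {
                    registered.remove(client);   // no ack: treat the client as gone
                }
            } catch (IOException e) {            // includes the read timing out
                registered.remove(client);       // the client left without unregistering
            }
        }
    }
}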
Hope this helps.
Basically, your idea sounds good to me.
I would suggest you dig more into the principles of "group communication" and look at frameworks such as jGroups; I know that JBoss Cache uses it to distribute data among its nodes.
Maybe for reliability clients should also query the server every X seconds to check that they have the correct version number,
or at least do this when they are started / recover from a crash.
I am writing this game in Java and am having problems with the networking architecture.
I decided I will use UDP packets. I am just at the beginning, but the problem I am facing is that it seems the server has to respond to the client (which is behind a router using NAT) from exactly the same IP/port that the client connected to.
For example, I have client A behind a router. Client A has the local IP 192.168.8.100 and connects to server B from port 1234. The server is on 11.11.11.11:2345.
When client A connects to server B, it uses 192.168.8.100:1234, but the router converts that to (for example) 22.22.22.22:6789.
Now, when the server wants to send packets to that client, they have to come from 11.11.11.11:2345.
I would like to send data from another port like 11.11.11.11:2222, but this does not seem to work, at least not with my router.
I want to use a different port because I want to have two threads, one for listening and one for sending data, and each thread would have its own DatagramSocket. But, as I said, once client A connects to the server on port 2345, I cannot send data from port 2222.
Does anyone know how is this handled? I am doing it in Java, but it's not really a language specific problem.
UPDATE
After #Perception commented I have some more questions regarding his comments:
OK, so if I understand this correctly, if I have a server hosting 1000 games, each with 2 players, all sending/receiving will have to be done through the same DatagramSocket.
As I understand it, DatagramSocket is thread safe, so I guess I can have one thread doing:
datagramSocket.receive();
while at the same time second thread is doing
datagramSocket.send(.....);
Correct?
Also, can two threads send data at the same time through the same DatagramSocket? Is sending serialized in any way, meaning that the second send() starts only after the previous send() has finished, or is the data sent at the same time?
gorann, I'm not sure if I'm understanding you correctly, but it sounds like you're trying to control the port on which the server communicates with the client. There's no way to control this, and for good reasons.
This is one of the trickier differences between TCP and UDP.
When a new TCP session is initiated, the server side call to accept() gives you a new socket and the OS handles multiplexing the various sessions for you. With UDP, you need to handle the multiplexing yourself. But you need to do so in a way that works with NATs and other firewalls.
The way NAT works is that when it sees an outgoing packet, it creates a temporary rule allowing packets to return along the same port pair. Data returning from a port that the client has not yet sent to will likely be blocked.
This gives you two choices:
You could do all of your communication through a single port. This is not a bad option, it just means that you need a way to identify client sessions and route them to the appropriate thread.
You could create a separate port and instruct the client to send to that one instead. Have the server listen on a fixed port. The client sends a message there, the server then sets up a new session port and sends that number back to the client using the server's listen port. The client then sends a message to the session port, which causes the NAT to open up that port and allow return traffic. Now the client and server thread have their own private port pair.
Option 1 is a bit more work because it requires data to be exchanged between threads, but it scales up better. Option 2 is easier and more CPU efficient because each session thread can be independent, but there are a finite number of ports available.
Either way, I recommend that you have the client include a semi-unique session id in each packet so that the server has more than just the client address and port number to verify who belongs to each session.
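A rough sketch of option 1, assuming each packet starts with a 4-byte session id (the layout and names are made up): one shared DatagramSocket, a single receive loop that routes by session id, and replies that may be sent from several game threads through the same socket.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// One shared socket for all games; the first 4 bytes of each packet carry a session id.
public class SessionDemux {
    private final DatagramSocket socket;
    private final Map<Integer, BlockingQueue<DatagramPacket>> sessions = new ConcurrentHashMap<>();

    SessionDemux(int port) throws Exception {
        this.socket = new DatagramSocket(port);
    }

    // Receive loop (run in one thread): route each packet to its session's queue.
    void receiveLoop() throws Exception {
        byte[] buf = new byte[1500];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);
            int sessionId = ByteBuffer.wrap(packet.getData(), 0, 4).getInt();
            sessions.computeIfAbsent(sessionId, id -> new LinkedBlockingQueue<>())
                    .offer(copyOf(packet));
        }
    }

    // Game threads call this; each datagram is handed to the OS as a whole unit.
    void reply(DatagramPacket request, byte[] response) throws Exception {
        socket.send(new DatagramPacket(response, response.length,
                request.getAddress(), request.getPort()));
    }

    // The receive buffer is reused, so keep a private copy per routed packet.
    private static DatagramPacket copyOf(DatagramPacket p) {
        byte[] copy = new byte[p.getLength()];
        System.arraycopy(p.getData(), p.getOffset(), copy, 0, p.getLength());
        DatagramPacket c = new DatagramPacket(copy, copy.length);
        c.setAddress(p.getAddress());
        c.setPort(p.getPort());
        return c;
    }
}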
I'm building a client/server app that has some very specific needs. There are 2 kinds of servers: the first kind provides most of the remote procedures and clients connect to these directly, while the second kind is a single server that should keep track of which users (clients) are active and how many servers of the first kind are active when a method is called.
The main thing is that the monitor should ONLY connect to the servers and not the clients directly. My first idea was to implement a simple login/logout RMI method called when a client connects/disconnects and keep track of those in a list, but the main problem is when a client or server ends abnormally.
For example, if a client goes offline abruptly, the server should somehow be notified and update the list accordingly, while if a server goes down, all of the clients connected to it should be marked as inactive in the control server.
Any ideas of how to implement this functionality would be appreciated.
I would suggest implementing a "session" approach to the problem, where the servers and the clients send a "heartbeat" method call to the monitoring server every several minutes (maybe seconds or hours, depending on your needs). If the monitoring server doesn't receive a "heartbeat" from a server or client within a certain amount of time, then you consider it gone (terminated abnormally) and notify accordingly.
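One way the monitoring server could track those heartbeats is a last-seen map that is swept periodically; the interval and timeout values below are illustrative.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Records the last heartbeat per node and reports nodes that have gone quiet.
public class HeartbeatMonitor {
    private static final long TIMEOUT_MILLIS = 3 * 60_000;   // consider a node gone after 3 minutes
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    // Called by the heartbeat endpoint (e.g. an RMI method) for each "I'm alive" message.
    public void heartbeat(String nodeId) {
        lastSeen.put(nodeId, System.currentTimeMillis());
    }

    public void startSweeper() {
        ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();
        sweeper.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            lastSeen.forEach((nodeId, seen) -> {
                if (now - seen > TIMEOUT_MILLIS) {
                    lastSeen.remove(nodeId);
                    System.out.println(nodeId + " considered terminated abnormally");
                }
            });
        }, 1, 1, TimeUnit.MINUTES);
    }
}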
Zookeeper may be something to look at. Have each clientserver register an ephemeral node for itself, and one for each client that is connected to it. When the clientserver goes down, its ephemeral nodes will die. The monitor server just needs to watch Zookeeper to see who is up and connected.
For detecting clients going down, you will need some kind of heartbeating so that the clientserver can detect when a client dies. If the client can talk to Zookeeper directly, then simply have the client register an ephemeral node in Zookeeper as well; the clientserver can watch the client's ephemeral node and know when the client is down.
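For illustration, the monitor could watch the group node roughly like this (the "/servers" path is made up, and the watch has to be re-armed on every callback):

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Monitor server: watches "/servers" and is notified whenever an ephemeral child appears or dies.
public class MembershipWatcher implements Watcher {
    private final ZooKeeper zk;

    MembershipWatcher(ZooKeeper zk) throws Exception {
        this.zk = zk;
        refresh();
    }

    private void refresh() throws Exception {
        // getChildren registers this object as a one-shot watch on the node.
        List<String> alive = zk.getChildren("/servers", this);
        System.out.println("Currently up: " + alive);
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeChildrenChanged) {
            try {
                refresh();   // re-read the list and re-arm the watch
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}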