How will the server know about a client connection loss? Does this trigger an event? Is it possible to store code (server side) so that it can execute before the connection loss happens?
This connection loss can happen if:
the client has been idle for too long,
the client side has terminated,
etc.
I am asking this in particular for JSP and PHP.
It depends on the protocol you're talking about, but a "connection" is typically established through a three-way handshake, which causes both parties to simply agree that they're "connected" now. This means both parties remember in a table that there's an open connection to IP a.b.c.d on port x and what context this "connection" is associated with. All incoming data from that "connection" is then passed to the associated context.
That's all there is to it, there's no real "physical" connection; it's just an agreed upon state between two parties.
Depending on the protocol, a connection can be formally terminated with an appropriate packet. One party sends this packet to the other, telling it that the "connection" is terminated; both parties remove the table entries and that's that.
If the connection is interrupted without this packet being sent, neither party will know about it. Only the next time one party tries to send data to the other will this problem become apparent.
Depending on the protocol, a connection may automatically be considered stale and terminated if no data was received for a certain amount of time. In this case, a dead connection will be noticed sooner, but this requires a constant back-and-forth of some sort between both parties.
So in short: yes, there is a server event that can be triggered, but it is not guaranteed to be triggered.
When you close a socket, the socket on the other end is notified. However, if the connection is lost ungracefully (e.g. a network cable is unplugged, or a computer loses power), then you probably will not find out.
To deal with this, you can send periodic messages just to check the connection. If the send fails, then the connection has been interrupted. Make sure you set up your sockets to only wait for a reasonable amount of time, though.
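For illustration, here's a minimal Java sketch of that idea. The probe payload is an assumption: your protocol needs some no-op message that both ends agree to tolerate. Note also that the first write after a break can still "succeed" into the TCP send buffer, so a dead peer may only show up on a later probe.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;

    public class ConnectionProbe {
        // Write a small probe message; a broken connection eventually
        // surfaces here as an IOException (possibly not on the very first
        // attempt, due to TCP buffering).
        public static boolean probe(Socket socket, byte[] pingMessage) {
            try {
                OutputStream out = socket.getOutputStream();
                out.write(pingMessage);
                out.flush();
                return true;
            } catch (IOException e) {
                return false; // connection interrupted
            }
        }
    }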
If you are talking about a typical client-server architecture, the server shouldn't worry about its connection to the client; only the client should worry about its connection to the server. The client should take measures to avoid the connection being dropped, such as periodically sending a keep-alive message, to avoid a timeout.
Why would the server need to worry about connection loss/termination?
The server's job is to serve the requests that come from the client. That's it. If the client doesn't receive the data it expected from the server, it can take appropriate action. If the connection drops while the server is doing some processing to produce data for the client, the server still can't do much, since the HTTP request is initiated by the client.
So the client can make a new request if for some reason it didn't get a response.
Related
I have a classic HTTP client/server application where the server serves data to the clients at their will, but also performs some kind of callbacks to the list of client addresses it has. My two questions are:
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open) ?
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
Option #1: direct communication
The client tells the server "I'm alive" at a periodic interval. You could make your client ping your server at a configurable interval, and if the server does not receive the signal for a certain time, it marks the client as down. The client could even tell the server more info (e.g. its status) in each heartbeat if necessary; this is also the approach used in many distributed systems (e.g. Hadoop/HBase).
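A minimal server-side sketch of that bookkeeping in plain Java. The String client id key, the 10-second sweep, and the 30-second timeout are illustrative assumptions, not fixed values:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class HeartbeatRegistry {
        private static final long HEARTBEAT_TIMEOUT_MS = 30_000; // ~3 missed 10 s heartbeats
        private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
        private final ScheduledExecutorService sweeper =
                Executors.newSingleThreadScheduledExecutor();

        public HeartbeatRegistry() {
            sweeper.scheduleAtFixedRate(this::evictDead, 10, 10, TimeUnit.SECONDS);
        }

        // Call whenever a heartbeat arrives from a client.
        public void heartbeat(String clientId) {
            lastSeen.put(clientId, System.currentTimeMillis());
        }

        private void evictDead() {
            long now = System.currentTimeMillis();
            // entries removed here correspond to clients considered "down"
            lastSeen.entrySet().removeIf(e -> now - e.getValue() > HEARTBEAT_TIMEOUT_MS);
        }
    }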
Option #2: distributed coordination service
You could treat all clients connected to a server as a group, and use a third-party distributed coordination service like ZooKeeper to facilitate membership management. A client registers itself with ZooKeeper as a new member of the group right after booting up, and leaves the group when it goes down. ZooKeeper notifies the server whenever the membership changes.
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open) ?
I think this can only be done via Option #1 above. Either the clients tell the server "My callback port is OK" at a fixed interval, or the server asks the clients "Is your callback port OK?" and waits for their response at a fixed interval.
You would have to establish some sort of protocol; simply put, the server keeps track of "messages" that it tried to send to clients.
If that "send" is acknowledged, fine; if not, the server might do a limited number of retries, then regard that client as "gone" and drop any other messages for that client.
1- How would the server know if a client is down (the client did not disconnect but the connection got suddenly interrupted) ?
A write to the client will fail.
2- Is there a way to know from the server-side if the process at client-side listening on the call-back port is still up (i.e. client call-back socket is still open)?
A write to the client will fail.
The write won't necessarily fail immediately, due to TCP buffering, but the write will eventually provoke retries and retry timeouts that will cause a subsequent read or write to fail.
In Java the failure will manifest itself as an IOException: connection reset.
I have a server setup using MINA version 2.
I don't have much experience with sockets and TCP.
The problem is that if I make a connection to my server, and then unplug my Internet connection and close the connection (the server gets no notification of the connection being closed), the server will forever think that my connection is still active and valid.
The server will continue to send messages to my connection, and doesn't throw any exceptions even though there is nothing on my computer bound to the local port.
How can I test that the connection still exists?
I've tried running MINA logging in debug mode, and logging IoSession.isConnected(), IoSession.isActive(), and IoSession.isClosing().
They always return true, true, false. Also, in debug mode, there was no useful information indicating that the connection was lost. It just logged the regular "sent message" output, as if nothing was wrong.
From using Flash ActionScript, I have had experiences where Flash will throw errors saying that it's operating on an invalid socket. That leads me to believe it's saying the socket on the server is no longer valid for the connection. So, in other words, if Flash can detect invalid sockets, a Java server should be able to detect them too, correct?
If there is truly no way to detect dead connections, I can always set up a keep-alive routine where the client constantly sends an "I'm here" message to the server, and the server closes sessions that haven't had an incoming message for a period of seconds.
EDIT: After learning that "sockets" are private and never shared over the network, I managed to find better results for my issue, and I found this SO thread:
Java socket API: How to tell if a connection has been closed?
Unfortunately, IOException 'Connection reset by peer' doesn't occur when I write to the IoSession in MINA.
Edit:
Is there any way at all in Java to detect when an ACK to a TCP packet was not received after sending a packet? An ACK Timeout?
Edit:
Yet apparently, my computer should send an RST to the server, according to this answer: https://stackoverflow.com/a/1434592/4425643
But that seems like a bad way of port scanning. Is this how port scanning works? Port scanners send data to a port and the victim's service responds with an RST? Sorry, I think I need a new question for all this. But it's odd that MINA doesn't throw 'connection reset by peer' when it sends data. So then my computer doesn't send an RST.
The concept of socket or connection in Internet protocols is an illusion. It's a convenient abstraction that is provided to you by the operating system and the TCP stack, but in reality, it's all fake.
Under the hood, everything on the Internet takes the form of individual packets.
From the perspective of a computer sending packets to another computer, there is no built-in way to know whether that computer is actually receiving the packets, unless that computer (or some other computer in between, like a router) tells you that the packets were, or were not, received.
From the perspective of a computer expecting to receive packets from another computer, there is no way to know in advance whether any packets are coming, will ever come, or in what order -- until they actually arrive. And once they arrive, just the fact that you received one packet does not mean you'll receive any more in the future.
That's why I say connections or sockets are an illusion. The way that the operating system determines whether a connection is "alive" or not, is simply by waiting an arbitrary amount of time. After that amount of time -- called a timeout -- if one side of the TCP connection doesn't hear back from the other side, it will just assume that the other end has been disconnected, and arbitrarily set the connection status to "closed", "dead" or "terminated" ("timed out").
So:
Your server has no clue that you've pulled the plug on your Internet connection. It has no way of knowing that.
Your server's TCP stack has been configured a certain way to wait an arbitrary amount of time before "giving up" on the other end if no response is received. If this timeout is set to a very large period of time, it may appear to you that your server is hanging on to connections that are no longer valid. If this bothers you, you should look into ways to decrease the timeout interval.
Analogy: If you are on a phone call with someone, and there's a very real risk of them being hurt or killed, and you are talking to them and getting them to answer, and then the phone suddenly goes dead..... Well, how long do you wait? At what point do you assume the other person has been hurt or killed? If you wait a couple milliseconds, in most cases that's too short of a "timeout", because the other person could just be listening and thinking of how to respond. If you wait for 50 years, the person might be long dead by then. So you have to set a reasonable timeout value that makes sense.
What you want is a KeepAlive, heartbeat, or ping.
As per #allquicatic's answer, there's no completely reliable built-in method to do this in TCP. You'll have to implement a method to explicitly ask the client "Are you still there?" and await an answer for a specified amount of time.
https://en.wikipedia.org/wiki/Keepalive
A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent this link from being broken.
https://en.wikipedia.org/wiki/Heartbeat_(computing)
In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a system. Usually a heartbeat is sent between machines at a regular interval in the order of seconds. If a heartbeat isn't received for a time—usually a few heartbeat intervals—the machine that should have sent the heartbeat is assumed to have failed.
The easiest way to implement one is to periodically send an arbitrary piece of data - e.g. a null command. A properly programmed TCP stack will time out if an ACK is not received within its specified timeout period, and then you'll get an IOException 'Connection reset by peer'.
You may have to manually tune the TCP parameters, or implement your own functionality if you want more fine-grained control than the default timeout.
The TCP framework is not exposed to Java, and Java does not provide a means to edit the TCP configuration that exists at the OS level.
This means we cannot use TCP keep-alive efficiently in Java, because we can't change its default configuration values. Furthermore, we can't set the timeout for not receiving an ACK for a message sent. (Learn about TCP to discover that every message sent waits for an ACK (acknowledgement) from the peer, confirming that the message was successfully delivered.)
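For example, this is about all the standard java.net API offers: an on/off switch, with the actual probe parameters coming from the OS. The endpoint below is a hypothetical placeholder:

    import java.io.IOException;
    import java.net.Socket;

    public class KeepAliveDemo {
        public static void main(String[] args) throws IOException {
            // "example.com", 8080 is a placeholder endpoint, for illustration only
            try (Socket socket = new Socket("example.com", 8080)) {
                // Java can turn TCP keep-alive on or off, but the probe interval
                // and timeout come from OS-level settings (often 2 hours by default)
                socket.setKeepAlive(true);
            }
        }
    }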
Java can only throw exceptions for cases such as a timeout for not completing the TCP handshake within a custom amount of time, a 'Connection reset by peer' exception when an RST is received from the peer, and an exception for an ACK timeout after whatever period of time that may be.
To dependably track connection status, you must implement your own ping/pong, keep-alive, or heartbeat system, as #Dog suggested in his answer. (The server must poll the client to see if it's still there, or the client has to continuously let the server know it's still there.)
For example, configure your client to send a small packet every 10 seconds.
In MINA, you can set a session reader idle timeout, which will send an event when a session reader has been idle for a period of time. You can terminate that connection on delivery of this event. Setting the reader timeout to be a bit longer than the small packet interval will account for random high latency between the client and server. For example, a reader idle timeout of 15 seconds would be lenient in this case.
If your server will rarely experience session idling, and you think you can save bandwidth by polling the client when the session has gone idle, look into using the Apache MINA Keep Alive Filter.
https://mina.apache.org/mina-project/apidocs/org/apache/mina/filter/keepalive/KeepAliveFilter.html
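For reference, a minimal MINA 2 sketch of the reader-idle approach described above. The 15-second window assumes a client that pings every 10 seconds, and the port is arbitrary; closeNow() is close(true) on older 2.x releases:

    import java.io.IOException;
    import java.net.InetSocketAddress;

    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IdleStatus;
    import org.apache.mina.core.session.IoSession;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class IdleAwareServer {
        public static void main(String[] args) throws IOException {
            NioSocketAcceptor acceptor = new NioSocketAcceptor();
            // fire sessionIdle() if nothing is read from a client for 15 s
            acceptor.getSessionConfig().setIdleTime(IdleStatus.READER_IDLE, 15);
            acceptor.setHandler(new IoHandlerAdapter() {
                @Override
                public void sessionIdle(IoSession session, IdleStatus status) {
                    // no data from the client within the window: treat as dead
                    session.closeNow();
                }
            });
            acceptor.bind(new InetSocketAddress(8080));
        }
    }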
I'm using Netty as part of a UDP server which opens a number of persistent connections with various clients on a single server channel. I need to perform some cleanup when a client terminates the connection. Specifically, I need to know when the connection is terminated and from which IP address the terminating client is located at so that I can cleanup the data.
Perhaps I'm misunderstanding the Netty model, but I'm unsure how to detect this. I normally distinguish between various clients by checking the sender's IP address from the DatagramPacket instances; perhaps that is wrong and I should be using multiple channels or something like that? Either way, I'm looking for some way to handle a closed connection.
You're misunderstanding UDP, not Netty. There is no such thing as a "persistent connection" in UDP, although there can be application-managed sessions, and UDP provides no information about whether the other endpoint is still active. Ideally, you'll have a "close" message, but you'll need to have a timeout or other mechanism to identify dead connections and clean them up (for crashed clients, etc.). This will require some sort of garbage-collection process, maybe a background thread that checks a "last heard from" timestamp and closes inactive sessions.
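As a sketch of that garbage-collection idea (Netty 4 style): track a "last heard from" timestamp per sender address and evict stale ones. The 60-second staleness window and how you schedule evictStaleSessions() are assumptions:

    import java.net.InetSocketAddress;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.socket.DatagramPacket;

    public class UdpSessionTracker extends SimpleChannelInboundHandler<DatagramPacket> {
        private static final long STALE_AFTER_NANOS = TimeUnit.SECONDS.toNanos(60);
        private final Map<InetSocketAddress, Long> lastSeen = new ConcurrentHashMap<>();

        @Override
        protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) {
            lastSeen.put(packet.sender(), System.nanoTime()); // refresh the "session"
            // ... dispatch packet.content() to your application logic ...
        }

        // Call periodically, e.g. from ctx.executor().scheduleAtFixedRate(...).
        public void evictStaleSessions() {
            long now = System.nanoTime();
            // evicted addresses are the "closed connections" to clean up after
            lastSeen.entrySet().removeIf(e -> now - e.getValue() > STALE_AFTER_NANOS);
        }
    }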
I have a typical Java client and a server. The client sends some request to the server and waits for the response. The client reads up to, say, 100 bytes of data from the contained input stream into an array of bytes. It waits for the complete response of 100 bytes to be read within a specified timeout period of, say, 3 seconds. The problem here is to identify if the server went down or crashed while/before writing the response. Basically, we need to identify if the socket was broken or the peer disconnected for some reason. Is there a way to identify this?
How to identify a broken socket connection in Java immediately?
You can't detect it immediately, in Java or any other language. TCP/IP doesn't know, so Java can't know. The only sure way to detect a broken TCP connection is by writing to it and catching IOExceptions, and they won't happen immediately.
The best way to identify that the connection is down is to time out the connection, i.e. you expect a response within a given amount of time and flag the connection if that response does not arrive as expected.
When you have a graceful disconnection (e.g. the other end calls close()), the read on the connection will let you know once the buffer has been drained.
However, if there is some other type of failure, you might not be notified until the OS times out the connection (e.g. after 3 minutes), and indeed, you may want to keep the connection. E.g. if you pull the network cable out for 10 seconds and plug it back in, that doesn't need to count as a failure.
EDIT: I don't believe it's a good idea to be too aggressive in automatically handling connection/service "failures". This is usually better handled by a planned fix to the system, based on investigation of the true cause, e.g. increased bandwidth, redundant connectivity, faster servers, code fixes.
If the connection is broken abnormally, you will receive an IOException when reading; that normally happens quite fast, but there are no guarantees about the time; it all depends on the OS, network hardware, etc. If the remote end gracefully closes the socket, you'll read -1 as the next byte.
Assuming everything else works, if the remote peer - the TCP server - was killed then the TCP client will normally receive a TCP RST (reset) and you'll get an IOException in your client application.
However, there are lots of other things that can go wrong besides a process being killed. Basically anything on the network path between the two processes: a cable is yanked, a router dies, a firewall dies, etc. All of this will not immediately be detected.
For the above reasons the general rule is - as pointed out in the answer from EJP - that a broken connection can only be detected by writing to it. This is why it is always recommended that a TCP client and TCP server exchange some type of heartbeat messages at regular intervals. There are different ways to do this. I like best the method where the TCP client will - in the absence of data being received from the TCP server - send a heartbeat message to the server and expect a reply back within a certain time period. This way heartbeat messages will only be sent when really needed.
A sub-optimal approach - if you cannot implement true heartbeating - is to always read with a timeout. Set the timeout on the socket and then catch java.net.SocketTimeoutException. This will allow you to know that no data has been received on the socket for x milliseconds.
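For example, a minimal sketch of that read-with-timeout pattern; the 3000 ms value mirrors the question's 3-second window:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class TimedReader {
        public static void readWithTimeout(Socket socket, byte[] buffer) throws IOException {
            socket.setSoTimeout(3000); // any blocking read now aborts after 3 s
            InputStream in = socket.getInputStream();
            try {
                int n = in.read(buffer);
                if (n == -1) {
                    System.out.println("peer closed the connection gracefully");
                } else {
                    System.out.println("read " + n + " bytes");
                }
            } catch (SocketTimeoutException e) {
                System.out.println("no data for 3 s: connection may be dead, or just slow");
            }
        }
    }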
It should be mentioned that there's one scenario where you don't have to use heartbeating, nor use the socket timeout: if the TCP client and the TCP server communicate over a loopback interface, then a broken connection will always be propagated to both the TCP client application and the TCP server application, because in this case there's really no network infrastructure between the two processes. So if you have an existing application that isn't well-designed with respect to its TCP communication (i.e. it doesn't implement some form of heartbeating or at least reading with a timeout), then as a last resort you may 'fix' the problem by moving the two applications onto the same host and letting them communicate over the loopback interface.
I have a typical client-server communication: the client sends data to the server, the server processes it, and returns data to the client. The problem is that the processing can take quite some time, on the order of minutes. There are a few approaches that could be used to solve this.
Establish a connection, and keep it alive, until the operation is finished and the client receives the response.
Establish connection, send data, close the connection. Now the processing takes place and once it is finished the server could establish a connection to the client to send the data.
Establish a connection, send data, close the connection. Processing takes place. The client asks the server every n minutes/seconds if the operation is finished. If the processing is finished, the client fetches the data.
I was wondering which approach would be the best to use. Is there maybe some "de facto" standard for solving this problem? How "expensive" is opening a socket in Java? Solution 1 seems pretty nasty to me, but 2 and 3 could do. The problem with solution 2 is that the server needs to know on which port the client is listening, while solution 3 adds some network overhead.
1. is good enough.
2. will not work in many situations, for example when the client is behind a firewall, NAT, and so on. Servers usually accept incoming connections from everywhere; desktops usually don't.
3. is better than 1, just because you won't have problems when the connection is lost.
Solutions 1+3: make long-waiting connections, with a periodic sleep and reconnect afterwards. I mean: connect to the server, wait 30 seconds for data; if no data is received, sleep for 10 seconds, and loop (a sketch of this loop follows below).
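A rough Java sketch of that connect/wait/sleep loop; the host, port, and buffer size are placeholders:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    public class PollingClient {
        public static void main(String[] args) throws InterruptedException {
            byte[] buffer = new byte[100];
            while (true) {
                try (Socket socket = new Socket("example.com", 8080)) {
                    socket.setSoTimeout(30_000);             // wait up to 30 s for data
                    InputStream in = socket.getInputStream();
                    int n = in.read(buffer);
                    if (n > 0) {
                        // result arrived; process buffer[0..n) and stop polling
                        break;
                    }
                } catch (IOException e) {
                    // timeout or connection failure: fall through and retry
                }
                Thread.sleep(10_000);                        // sleep 10 s, then reconnect
            }
        }
    }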
Opening sockets is somewhat expensive, but nowhere near as expensive as your data processing.
I see an immediate problem with option 2. If the client is behind a firewall, it might very well be allowed to connect and make the request, but the server might be prevented from connecting back to the client.
As you say, option 1 looks a bit nasty (not too nasty though, it could work well), so among the options listed, I would go for option 3. Perhaps the server could estimate the time remaining in the processing, and hint to the client, in each poll, when it's about time to check back.