I have a typical client-server communication: the client sends data to the server, the server processes it and returns data to the client. The problem is that the processing can take quite some time, on the order of minutes. There are a few approaches that could be used to solve this.
1. Establish a connection and keep it alive until the operation is finished and the client receives the response.
2. Establish a connection, send the data, close the connection. The processing takes place, and once it is finished the server establishes a connection to the client to send the result.
3. Establish a connection, send the data, close the connection. The processing takes place, and the client asks the server every n minutes/seconds whether the operation is finished. Once the processing is finished, the client fetches the data.
I was wondering which approach would be best. Is there maybe some "de facto" standard for solving this problem? How "expensive" is opening a socket in Java? Solution 1 seems pretty nasty to me, but 2 and 3 could work. The problem with solution 2 is that the server needs to know which port the client is listening on, while solution 3 adds some network overhead.
Option 1 is good enough.
Option 2 will not work in many situations, for example when the client is behind a firewall, NAT, and so on. A server usually accepts incoming connections from everywhere; desktops usually don't.
Option 3 is better than 1 simply because you won't have problems when the connection is lost.
My suggestion: combine 1 and 3, i.e. make long-waiting connections with a periodic sleep and reconnect. I mean: connect to the server, wait 30 seconds for data; if no data is received, sleep for 10 seconds, then loop.
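A rough Java sketch of that combined approach (the 30-second wait and 10-second sleep follow the description above; the host, port, and result handling are invented for illustration):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class LongWaitClient {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            try (Socket socket = new Socket("server.example.com", 9000)) { // hypothetical endpoint
                socket.setSoTimeout(30_000);              // wait up to 30 s for data
                InputStream in = socket.getInputStream();
                byte[] buf = new byte[4096];
                int n = in.read(buf);                     // blocks until data, EOF, or timeout
                if (n > 0) {
                    handleResult(buf, n);                 // result arrived, process it and stop
                    break;
                }
            } catch (SocketTimeoutException e) {
                // no data within 30 s: fall through and retry after a pause
            } catch (IOException e) {
                // connection failed or dropped: also retry after a pause
            }
            Thread.sleep(10_000);                         // sleep 10 s, then reconnect
        }
    }

    private static void handleResult(byte[] buf, int len) {
        // application-specific processing of the server's response
    }
}
```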
Opening sockets can be somewhat expensive, but nowhere near as expensive as your data processing.
I see an immediate problem with option 2. If the client is behind a firewall, he might very well be allowed to connect and make the request, but the server might be prevented from connecting back to the client.
As you say, option 1 looks a bit nasty (not too nasty though; it could work well), so among the options listed I would go for option 3. Perhaps the server could estimate the time left for the processing and, in each poll, hint to the client when it's about time to check back.
I am developing an Android client app in Java for a NodeMCU websocket server. I successfully created the client and connected to the server through a websocket client service. I can detect a server failure/closed connection when sending data, but I can't detect it at the moment of failure: if the server is powered off, I can't know until some data is sent. How can I detect the server failure at the time it happens? I am using the OkHttp 4.1.0 library. Can anyone help?
How can I detect the server failure at the time it happens? I am using the OkHttp 4.1.0 library. Can anyone help?
You can't. It's not possible, but there are workarounds; see below.
Why isn't it possible? Internally, the internet is packet switched, which means data is first gathered up into packets, and then these packets are sent.
Most of the stuff you do on the web feels like it is 'streams' instead (you send 1 character, and one character arrives on the other side). But that's all based on protocols that are built on top of the packet nature of the internet.
When you have an open connection between 2 computers via the internet, no data is actually being sent, at all. It's not like you have a line reserved. Old telephone networks did work like that: When you dialled somebody, you got a dedicated line, and once the line got interrupted, you'd hear beeps to indicate this.
That is not how the internet works. Those wires and everything in between have no idea that there is an open connection at all. That's just some bits in memory on your computer and on the server which let them identify certain packets as part of the longer conversation those 2 machines were having, is all.
Thus we arrive at why this isn't possible: Given that no packets are flowing whatsoever until one side actually sends data to the other, it is impossible to tell the difference between 'no data being sent right now' and 'somebody tripped over the power cable in the server park'. That's why you don't get that info until you send something (and the reason you get that is only because when you send something, the protocol dictates that the server sends you back a confirmation of receiving what you sent. If that takes too long, your computer will send it a few more times just in case the packet just got lost somewhere, and will eventually give up and conclude that the server can no longer be reached or crashed or lost power, and only then do you get the IOException).
Workarounds
A simple one is to upgrade your own protocol: Dictate that the server or client (doesn't matter who takes the responsibility to do this) sends a do-nothing message at least once a minute. You can then conclude after not receiving that for 100 seconds or so that the connection is probably dead. You can start a timer for 100 seconds, reset it every time you receive any data whatsoever. If the timer ever runs out? Connection is likely dead.
There is a version of this idea built into the protocol that lets you make connections that feel like streams of data. That protocol is called TCP/IP, and the feature is called keepalive.
The problem is, you possibly don't get to dictate the TCP/IP settings for your websocket connection. If you can, turn on keepalive (for example, in Java you use Socket to make raw TCP/IP connections, and it has a .setKeepAlive(true) method). Check the API to see if you can get at the socket, or otherwise scan the docs for 'keepalive' and see if there's anything there.
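For reference, enabling keepalive on a raw java.net.Socket looks like this (the endpoint is a placeholder; whether your websocket library lets you reach the underlying socket is a separate question):

```java
import java.io.IOException;
import java.net.Socket;

public class KeepAliveExample {
    public static void main(String[] args) throws IOException {
        // hypothetical endpoint; keepalive asks the OS to probe idle connections
        try (Socket socket = new Socket("example.com", 80)) {
            socket.setKeepAlive(true);
            System.out.println("keepalive enabled: " + socket.getKeepAlive());
        }
    }
}
```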
I bet there won't be, which means you have to use the trick I mentioned above: Update your server code to use a timer to send a 'hello!' 60 seconds after any conversation, and update your client code to give up on the connection once 100 seconds have passed (give it 40 additional seconds; sometimes the internet gets a little backed up or servers get a little busy).
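For instance, a client-side watchdog along those lines could look like the minimal sketch below, assuming your websocket library gives you a callback for every received message (the 100-second limit follows the description above; the class and method names are invented):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ConnectionWatchdog {
    private static final long LIMIT_SECONDS = 100;   // 60 s heartbeat interval + 40 s slack

    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final Runnable onDead;                    // what to do when the connection is presumed dead
    private ScheduledFuture<?> pending;

    public ConnectionWatchdog(Runnable onDead) {
        this.onDead = onDead;
        reset();
    }

    /** Call this for every frame received, including the do-nothing heartbeats. */
    public synchronized void reset() {
        if (pending != null) {
            pending.cancel(false);                    // any received data proves the connection is alive
        }
        pending = timer.schedule(onDead, LIMIT_SECONDS, TimeUnit.SECONDS);
    }
}
```

With OkHttp you might, for example, pass the websocket's cancel() as the onDead action and call reset() from your listener's onMessage; the exact wiring depends on your client code.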
I've got a Java program that opens a TCP stream and connects to a listening port on a remote server. I send a request to the server and I receive a response. I then let the stream sit idle for 60 minutes. At that point if I write a new request it will not arrive at the server. In short order TCP/IP will let me know that the connection has gone away.
My client code is running on a Windows laptop which is connected to a corporate environment via a VPN router. The server is whirring away up in Canada, far away from me here in central Massachusetts USA. I'm likely being routed through multiple pieces of networking equipment. I have no idea which one is causing the stream to break. (I keep thinking of Ghostbusters and "Don't cross the streams!")
What is the best term to use when a piece of equipment specifically "forgets" about a TCP connection which has been idle, causing it to break? Is that half-open, half-closed, or just plain gone?
I want to be able to simulate this timeout scenario entirely within my home lab so that I can perform easier testing -- for example where I don't have to wait for 60 minutes! What's a good technique, and what is the appropriate equipment I should use to simulate this "disconnect"? I've got extra switches here at home, as well as one old (and feisty!) WRT router that could use some lovin'.
I do not want to enable keepalive to mask the problem. Keepalive won't prevent all possible stream disconnection scenarios, AFAIK. I want to do the best that I can at letting the problem occur and handling it quickly and cleanly when it does.
Thank you very much,
Bill S
I've been looking into making a simple Sockets-based game in Java, and read in multiple places that client sockets are destroyed after a single exchange. Is this good practice for continued connections? The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response. I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client? I've seen so many conflicting websites about sockets in Java and how they should be implemented.
There are no hard and fast rules, but it does depend slightly on what data rates you want to achieve.
For example, YouTube is a streaming video service, but the video data is delivered by the client using HTTPS to fetch batches of video data. Inefficient, yes, but very easy to program for. There are lots of reasons to use HTTPS for an application like YouTube (firewalls, etc.), but ultimate power saving and network performance were not among them. The "proper" way would be to use a protocol like RTP, which uses UDP to deliver small packets of data that can then be rearranged into order; you also have to deal with missing frames at the codec level, etc. Much less network traffic and friendlier to bandwidth-constrained network links, but significantly more difficult to deal with when traversing firewalls, in client software, etc.
So if your game is sending modest amounts of data, the only thing wrong with setting up and tearing down a whole socket connection for every message is the nagging feeling you yourself will have that it is somehow not the most efficient solution.
Though it sounds like you have a conflict between the need to communicate between client and server and a need to process something else whilst waiting for the communication to complete. Here you're getting into asynchronous I/O territory. To make that easy I strongly suggest you take a look at ZeroMQ - that will make everything a whole lot simpler.
and read in multiple places that client sockets are destroyed after a single exchange.
Only in the places where that actually happens. There are numerous contexts where it doesn't, the outstanding example being HTTP, where every effort is made to reuse connections.
Is this good practice for continued connections?
The question is a contradiction in terms. A continued connection is a connection that isn't closed. A closed connection can't be continued.
The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response.
The word you are groping for here is 'session'.
I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client?
Yes.
I've seen so many conflicting websites about sockets in Java and how they should be implemented.
You should use a connection pool at the client; a request loop at the server that looks for multiple requests per connection; a client-side facility that closes idle connections after some idle timeout; and a read timeout at the server that closes connections on which no request has been read within the timeout.
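As an illustration of the server side of that, here is a bare-bones sketch; the port, the line-based "protocol", and the 30-second read timeout are invented, and a real server would use a thread pool rather than a thread per connection:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class RequestLoopServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000)) {        // hypothetical port
            while (true) {
                Socket client = server.accept();
                new Thread(() -> serve(client)).start();            // one thread per connection, for brevity
            }
        }
    }

    private static void serve(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            c.setSoTimeout(30_000);                                 // read timeout: drop idle connections
            String request;
            while ((request = in.readLine()) != null) {             // multiple requests per connection
                out.println("OK " + request);                       // placeholder response
            }
        } catch (SocketTimeoutException e) {
            // no request within the timeout: close the connection
        } catch (IOException ignored) {
            // client went away; nothing more to do
        }
    }
}
```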
I have a typical Java client and server. The client sends a request to the server and waits for the response. The client reads up to, say, 100 bytes of data from the connection's input stream into a byte array. It waits for the complete 100-byte response to be read within a specified timeout period of, say, 3 seconds. The problem here is to identify whether the server went down or crashed while/before writing the response. Basically, we need to identify whether the socket was broken or the peer disconnected for some reason. Is there a way to identify this?
How to identify a broken socket connection in Java immediately?
You can't detect it immediately, in Java or any other language. TCP/IP doesn't know, so Java can't know. The only sure way to detect a broken TCP connection is by writing to it and catching IOExceptions, and they won't happen immediately.
The best way to identify that the connection is down is to time out the connection, i.e. you expect a response within a given amount of time and flag the connection if that response does not come as expected.
When you have a graceful disconnection (e.g. the other end calls close()), the read on the connection will let you know once the buffer has been drained.
However, if there is some other type of failure, you might not be notified until the OS times out the connection (e.g. after 3 minutes), and indeed you may want to keep the connection; e.g. if you pull the network cable out for 10 seconds and put it back in, that doesn't need to be treated as a failure.
EDIT: I don't believe it's a good idea to be too aggressive in automatically handling connection/service "failures". This is usually better handled by a planned fix to the system, based on investigation of the true cause, e.g. increased bandwidth, redundant connectivity, faster servers, code fixes.
If the connection is broken abnormally, you will receive an IOException when reading; that normally happens quite fast, but there is no guarantee about the time, as it all depends on the OS, network hardware, etc. If the remote end gracefully closes the socket, you'll read -1 as the next byte.
Assuming everything else works, if the remote peer - the TCP server - was killed then the TCP client will normally receive a TCP RST (reset) and you'll get an IOException in your client application.
However, there are lots of other things that can go wrong besides a process being killed. Basically anything on the network path between the two processes: a cable is yanked, a router dies, a firewall dies, etc. All of this will not immediately be detected.
For the above reasons the general rule is - as pointed out in the answer from EJP - that a broken connection can only be detected by writing to it. This is why it is always recommended that a TCP client and TCP server exchange some type of heartbeat messages at regular intervals. There are different ways to do this. I like best the method where the TCP client will - in the absence of data being received from the TCP server - send a heartbeat message to the server and expect a reply back within a certain time period. This way heartbeat messages will only be sent when really needed.
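For illustration, a client-initiated heartbeat along those lines might look like the sketch below; the PING/PONG text exchange and the 5-second deadline are invented, and a real client would reuse its reader and writer instead of wrapping the streams on every probe:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class HeartbeatProbe {
    /**
     * Sends a heartbeat and waits for the reply. Returns true if the peer
     * answered in time, false if the connection should be treated as broken.
     */
    static boolean probe(Socket socket) throws IOException {
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);   // autoflush on println
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        socket.setSoTimeout(5_000);   // expect the reply within 5 seconds
        out.println("PING");          // made-up heartbeat message
        try {
            return "PONG".equals(in.readLine());
        } catch (SocketTimeoutException e) {
            return false;             // no reply in time: consider the connection dead
        }
    }
}
```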
A sub-optimal approach - if you cannot implement true heartbeating - is to always read with a timeout. Set the timeout on the socket and then catch java.net.SocketTimeoutException. This will allow you to know that no data has been received on the socket during x milliseconds.
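A minimal sketch of that approach, matching the 100-byte / 3-second scenario from the question (the host and port are placeholders):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutRead {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("server.example.com", 7000)) {  // hypothetical endpoint
            socket.setSoTimeout(3_000);                  // expect data within 3 s of each read
            InputStream in = socket.getInputStream();
            byte[] response = new byte[100];
            int read = 0;
            try {
                while (read < response.length) {
                    int n = in.read(response, read, response.length - read);
                    if (n == -1) {                       // graceful close by the peer
                        System.out.println("peer closed the connection after " + read + " bytes");
                        return;
                    }
                    read += n;
                }
                System.out.println("full " + response.length + "-byte response received");
            } catch (SocketTimeoutException e) {
                System.out.println("no data for 3 s: connection is suspect");
            }
        }
    }
}
```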
It should be mentioned that there's one scenario where you don't have to use heartbeating, nor the socket timeout: if the TCP client and the TCP server communicate over a loopback interface then a broken connection will always be propagated to both the TCP client application and the TCP server application. This is because, in this case, there's really no network infrastructure between the two processes. So if you have an existing application which isn't well-designed with respect to its TCP communication (i.e. it doesn't implement some form of heartbeating or at least reading with a timeout), then as a last resort you may 'fix' the problem by moving the two applications onto the same host and letting them communicate over the loopback interface.
I have a J2ME app running on my mobile phone (client).
I would like to open an HTTP connection with the server and keep polling for updated information on the server.
Every poll performed will use up GPRS bytes and would turn out expensive in the long run, as GPRS billing is based on packets sent and received.
Is there a byte-efficient way of polling using the HTTP protocol?
I have also heard of long polling, but I am not sure how it works or how efficient it would be.
Actually, the preferred way would be for the server to tell the phone app that new data is ready to be used, so polling wouldn't be needed at all; however, I don't know of such techniques, especially in J2ME.
If you want to solve this problem using HTTP only, long polling would be the best way. It's fairly easy. First you need to set up a URL on the server side for notification (e.g. http://example.com/notify) and define a notification protocol. The protocol can be as simple as some text lines where each line is an event. For example,
MSG user1
PHOTO user2 album1
EMAIL user1
HEARTBEAT 300
The polling thread on the phone works like this:
1. Make an HTTP connection to the notification URL. In J2ME, you can use the GCF HttpConnection.
2. The server will block if there are no events to push.
3. If the server responds, read each line, spawn a new thread to notify the application, and loop back to step 1.
4. If the connection closes for any reason, sleep for a while and go back to step 1.
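Below is a plain-Java sketch of that loop, written with HttpURLConnection so it stays self-contained; on the phone you would use the GCF HttpConnection from step 1 instead, and the URL, timeout, and sleep values are only illustrative.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class LongPollLoop {
    private static final String NOTIFY_URL = "http://example.com/notify"; // from the answer above

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(NOTIFY_URL).openConnection();
                conn.setReadTimeout(5 * 60 * 1000);   // long read timeout: the server blocks until it has events
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        handleEvent(line);            // e.g. "MSG user1" or "HEARTBEAT 300"
                    }
                }
            } catch (IOException e) {
                // timed out or connection dropped: fall through and reconnect
            }
            Thread.sleep(10_000);                     // step 4: back off before reconnecting
        }
    }

    private static void handleEvent(String line) {
        // hand the event line to the application, ideally on another thread (step 3)
    }
}
```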
You have to pay attention to the following implementation details:
Tune HTTP timeouts on both client and server. The longer the timeout, the more efficient. A timed-out connection will cause a reconnect.
Enable HTTP keep-alive on both the phone and the server. TCP's 3-way handshake is expensive in GPRS terms, so try to avoid it.
Detect stale connections. In mobile environments, it's very easy to get stale HTTP connections (the connection is gone but the polling thread is still waiting). You can use heartbeats to recover. Say the heartbeat rate is 5 minutes: the server should send a notification every 5 minutes, and if there's no data to push, just send HEARTBEAT. On the phone, the polling thread should close and reopen the polling connection if nothing has been received for 5 minutes.
Handle connectivity errors carefully. Long polling doesn't work well when there are connectivity issues; if not handled properly, it can be the deal-breaker. For example, you can waste lots of packets in step 4 if the sleep is not long enough. If possible, check GPRS availability on the phone and put the polling thread on hold when GPRS is not available, to save battery.
Server cost can be very high if not implemented properly. For example, if you use a Java servlet, every running application will have at least one corresponding polling connection and its thread. Depending on the number of users, this can kill a Tomcat quickly :) You need to use resource-efficient technologies, like Apache MINA.
I was told there are other, more efficient ways to push notifications to the phone, like using SMS and some IP-level tricks, but you either have to do some low-level non-portable programming or run into risks of patent violations. Long polling is probably the best you can get with an HTTP-only solution.
I don't know exactly what you mean by "polling". Do you mean something like IMAP IDLE?
A connection stays open and there is no overhead from building up the connection again and again. As stated, another possible solution is the HEAD method of an HTTP request (forgot it, thanks!).
Look into this tutorial for the basics of HTTP connections in J2ME.
Pushing data to an application/device without push support (like a BlackBerry has) is not possible.
The HEAD HTTP request is the method that HTTP provides for checking whether a page has changed. It is used by browsers and proxy servers to check whether a page has been updated without consuming much bandwidth.
In HTTP terms, a HEAD request is the same as a GET without the body. I assume the response would be only a couple of hundred bytes at most, which looks acceptable if your polls are not very frequent.
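For illustration, a HEAD-based poll with the standard HttpURLConnection might look like this (the URL is a placeholder; the idea is to compare the Last-Modified/ETag headers between polls):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class HeadPoll {
    public static void main(String[] args) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/data").openConnection(); // placeholder URL
        conn.setRequestMethod("HEAD");                          // headers only, no body is transferred
        int code = conn.getResponseCode();
        String lastModified = conn.getHeaderField("Last-Modified");
        String etag = conn.getHeaderField("ETag");
        System.out.println(code + " Last-Modified: " + lastModified + " ETag: " + etag);
        conn.disconnect();
        // compare Last-Modified/ETag with the previous poll's values to see whether anything changed
    }
}
```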
The best way to do this is to use a socket connection. Many applications, like Gmail, use them.