I've been looking into making a simple Sockets-based game in Java, and read in multiple places that client sockets are destroyed after a single exchange. Is this good practice for continued connections? The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response. I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client? I've seen so many conflicting websites about sockets in Java and how they should be implemented.
There are no hard and fast rules, but it does depend somewhat on what data rates you want to achieve.
For example, YouTube is a streaming video service, but the video data is delivered by the client using HTTPS to fetch batches of video data. Inefficient, yes, but very easy to program for. There are lots of reasons to use HTTPS for an application like YouTube (firewalls, etc.), but ultimate power saving and network performance were not among them. The "proper" way would be to use a protocol like RTP, which uses UDP to deliver small packets of data that can then be rearranged into order; you also have to deal with missing frames at the codec level, and so on. Much less network traffic and friendly to bandwidth-constrained network links, but significantly more difficult to deal with when traversing firewalls, in client software, etc.
So if your game is sending modest amounts of data, the only thing wrong with setting up and tearing down a whole socket connection for every message is the nagging feeling you yourself will have that it is somehow not the most efficient solution.
Though it sounds like you have a conflict between the need to communicate between client and server and the need to process something else whilst waiting for the communication to complete. Here you're getting into asynchronous I/O territory. To make that easy, I strongly suggest you take a look at ZeroMQ; it will make everything a whole lot simpler.
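For the simple case, though, you don't need a framework at all. Here is a minimal plain-Java sketch of the pattern the question asks about: one persistent socket per client with a dedicated reader thread, so the server can send at any time without waiting for a response. The class and the handle() placeholder are my own illustration, not from any library.

import java.io.*;
import java.net.*;

// Sketch: one persistent TCP connection per client. A dedicated reader
// thread consumes the client's messages, so the server can write to the
// client at any time without blocking on a response.
public class ClientSession implements Runnable {
    private final Socket socket;
    private final BufferedReader in;
    private final PrintWriter out;

    public ClientSession(Socket socket) throws IOException {
        this.socket = socket;
        this.in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        this.out = new PrintWriter(socket.getOutputStream(), true); // auto-flush
    }

    // Called from any server thread whenever there is something to tell the client.
    public void send(String message) {
        out.println(message);
    }

    public void run() {
        try {
            String line;
            while ((line = in.readLine()) != null) {
                handle(line); // process the client's message
            }
        } catch (IOException e) {
            // connection lost: fall through and clean up
        } finally {
            try { socket.close(); } catch (IOException ignored) {}
        }
    }

    private void handle(String message) {
        // game-specific logic goes here (illustrative placeholder)
    }
}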
and read in multiple places that client sockets are destroyed after a single exchange.
Only in the places where that actually happens. There are numerous contexts where it doesn't, the outstanding example being HTTP, where every effort is made to reuse connections.
Is this good practice for continued connections?
The question is a contradiction in terms. A continued connection is a connection that isn't closed. A closed connection can't be continued.
The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response.
The word you are groping for here is 'session'.
I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client?
Yes.
I've seen so many conflicting websites about sockets in Java and how they should be implemented.
You should use a connection pool at the client; a request loop at the server that looks for multiple requests per connection; a client-side facility that closes idle connections after some idle timeout; and a read timeout at the server that closes connections on which no request has been read within the timeout.
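A minimal sketch of the server side of that arrangement, assuming a simple line-oriented protocol; the port, the 30-second timeout, and the process() helper are placeholders of my own:

import java.io.*;
import java.net.*;

// Sketch: handle multiple requests per connection, and close the
// connection when no request arrives within the read timeout.
public class RequestLoopServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> serve(client)).start();
            }
        }
    }

    static void serve(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            c.setSoTimeout(30_000); // read timeout: 30 s with no request
            String request;
            while ((request = in.readLine()) != null) { // multiple requests per connection
                out.println(process(request));
            }
        } catch (SocketTimeoutException e) {
            // idle too long: let try-with-resources close the connection
        } catch (IOException ignored) {
        }
    }

    static String process(String request) {
        return "echo: " + request; // illustrative placeholder
    }
}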
I have to write a program in which clients send the server a number and wait for its response, another random number. This repeats indefinitely: send a number, wait for the response, and so on...
I would like to write a server which accepts many connections (and creates sockets). How can I do that in an efficient way (without creating a thread for every socket created)?
Is it better to open and close sockets for every request and response?
Is there a way to send an answer over a socket when I don't know which one is the right socket, but I know that all the sockets come from the same client computer and I know the client's source port?
(I thought about making an array of sockets.)
How can I do that in an efficient way (without creating a thread for every socket created)?
You are assuming without proof that a new thread per socket is inefficient. It isn't.
Is it better to open and close sockets for every request and response?
No. Take a look at the history of HTTP. The major change between 1.0 and 1.1 was the introduction of persistent connections, which was done regardless of server-side architectures.
Is there a way to send answer over a socket when I don't know which one is the right socket
I don't understand how that situation could possibly arise. The answer only makes sense in the context of a specific session, which is associated with a specific socket. If you aren't retaining that information you should be. It's just a data structure problem.
but I know that all the sockets come from the same client computer and I know the client's source port (I thought about making an array of sockets)
If you can remember the source port, you can remember the socket itself. Again, this is just a data structure problem. There is also no need for the assumption/constraint that all connections come from the same client, and unless that client is multi-threaded there is no need for multiple connections from it at all.
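As a sketch of that data structure (the class and the session-key scheme are illustrative): record each socket under whatever identifies the session when you accept the connection, and finding the "right socket" for an answer becomes a simple lookup:

import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: remember each client's socket under a session key,
// so the right socket for an answer is just a map lookup.
public class SessionRegistry {
    private final Map<String, Socket> sessions = new ConcurrentHashMap<>();

    public void register(String sessionId, Socket socket) {
        sessions.put(sessionId, socket);
    }

    public Socket lookup(String sessionId) {
        return sessions.get(sessionId);
    }

    public void remove(String sessionId) {
        sessions.remove(sessionId);
    }
}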
I'd like to implement real-time messaging, such as chat in Facebook, but several questions confuse me:
1. To reduce server overhead and make it truly 'realtime', I should use a full-duplex means of communication such as a socket, instead of Ajax. Is that right?
2. If I use a socket, which protocol should I choose, TCP or UDP?
3. Assuming I use TCP, will the server keep trying to resend lost packets, and will that cause a lot of overhead?
4. What if the network fails during communication between the server and a client? Will the socket close itself, or should I handle the various kinds of network failure myself?
Can anyone help?
You can use WebSockets. XMLHttpRequest is probably obsolete now for anything real-time (because it's not real-time), though you could fall back to it for people whose browsers don't support WebSockets.
Use UDP if the information you are sending is only valid at the moment it is sent. For example, in games that would be the position of the players (you don't care to receive the position they were in 5 seconds ago). Besides, you can't use UDP with WebSockets.
For anything other than that, use TCP (unless you do hole punching to achieve p2p), because loss of data is probably bad for you, and TCP handles that.
You would have to check for and resend lost data manually with UDP anyway, unless failure in communication is acceptable to you.
You will get an IOException. If the connection was closed improperly, the exception will be thrown after a timeout of unresponsiveness that you can tune to your needs. This assumes TCP; with UDP you have to decide for yourself when to consider clients connected or disconnected, based on the data you do (or don't) receive.
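A sketch of what that looks like on the reading side with TCP; the 30-second timeout is arbitrary:

import java.io.*;
import java.net.*;

// Sketch: telling apart the ways a TCP peer can go away.
public class PeerWatcher {
    static void readLoop(Socket socket) throws IOException {
        socket.setSoTimeout(30_000); // unresponsiveness timeout, tune to your needs
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[1024];
        while (true) {
            try {
                int n = in.read(buf);
                if (n == -1) {
                    break; // orderly close: the peer shut the connection down
                }
                // ... process n bytes ...
            } catch (SocketTimeoutException e) {
                // nothing received within the timeout: peer may be stale
            } catch (IOException e) {
                throw e; // improper close (reset, network failure): peer is gone
            }
        }
    }
}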
I have a Scala application which maintains (or tries to) TCP connections to various servers for hours (possibly > 24) at a time. Each server sends a short, ~30 character message about twice a second. These messages are fed into an iteratee where they are parsed and eventually end up making state changes to a database.
If any of these connections fail for any reason, my app needs to continually try to reconnect until I specify otherwise. Any messages getting lost is Bad. I have no control over the servers I connect to, or the protocols used.
It is conceivable there would be as many as 300 of these connections at once. Not exactly a high-load scenario, so I don't think NIO is needed, though it might be nice to have? Other parts of the app are high-load.
I'm looking for some sort of socket controller / manager which can keep these connections up as reliably as possible. I am running my own blocking controller now, but as I'm inexperienced with socket coding (and all the various settings, options, timeouts, etc.) I doubt it will achieve the best possible uptime. Plus I may need SSL support at some point down the line.
Would NIO offer any real advantages?
Would Netty be the best choice here? I've seen the Uptime example here, and was thinking of simply duplicating it, but being new to lower-level networking I wasn't sure if there were better options.
However, I'm uncertain of the best strategies for ensuring that as few packets as possible are lost, and I assumed this would be a "solved" problem in one library or another.
Yup. JMS is an example.
I suppose a lot of it would come down to a timeout-guessing strategy? Close and re-open a socket too early and you've lost whatever packets were en route.
That is correct. That approach is not going to be reliable, especially if connections go up and down regularly.
A real solution involves having the other end keep track of what it has received, and letting the sender know when the connection is re-established. If that can't be done, you have no real way of controlling how much gets lost. (This is what the reliable messaging services do ...)
I have no control over the servers I connect to, so unless there's another way to adapt JMS to a generic TCP stream, I don't think it will work.
Yup. And the same applies if you try to implement this by hand. The other end has to cooperate.
I guess you could construct something where you run (say) a JMS endpoint on each of the remote servers, and have the endpoint use UNIX domain sockets or loopback (i.e. 127.0.0.1) to talk to the server. But you still have the potential for message loss.
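To make the required cooperation concrete, here is a sketch of the receiver side of such a scheme. The wire format (RESUME <seq>, then <seq> <payload> lines) and the persistence helpers are invented for illustration; this only works if the sender implements the same protocol, which is exactly the cooperation described above:

import java.io.*;
import java.net.*;

// Sketch: on (re)connect, tell the sender the last sequence number we
// durably processed, so it can resend everything after that point.
public class ResumingReceiver {
    private long lastSeq; // loaded from durable storage at startup

    void receive(Socket socket) throws IOException {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        out.println("RESUME " + lastSeq); // hypothetical resume handshake
        String line;
        while ((line = in.readLine()) != null) {
            int sp = line.indexOf(' ');
            long seq = Long.parseLong(line.substring(0, sp));
            if (seq <= lastSeq) continue;       // duplicate after a reconnect: skip
            process(line.substring(sp + 1));    // apply the message (placeholder)
            lastSeq = seq;
            persist(lastSeq);                   // record progress before moving on
        }
    }

    void process(String payload) { /* state changes go here */ }
    void persist(long seq)       { /* write to durable storage */ }
}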
I have a typical client-server communication: the client sends data to the server, the server processes it and returns data to the client. The problem is that the processing can take quite some time, on the order of minutes. There are a few approaches that could be used to solve this.
1. Establish a connection and keep it alive until the operation is finished and the client receives the response.
2. Establish a connection, send the data, close the connection. The processing takes place, and once it is finished the server establishes a connection to the client to send the data.
3. Establish a connection, send the data, close the connection. The processing takes place; the client asks the server every n minutes/seconds whether the operation is finished, and once it is, fetches the data.
I was wondering which approach would be the best to use. Is there maybe some de facto standard for solving this problem? How "expensive" is opening a socket in Java? Solution 1 seems pretty nasty to me, but 2 and 3 could do. The problem with solution 2 is that the server needs to know which port the client is listening on, while solution 3 adds some network overhead.
1. is good enough.
2. will not work in many situations, for example when the client is behind a firewall, NAT, and so on. Servers usually accept incoming connections from everywhere; desktops usually don't.
3. is better than 1, if only because you won't have problems when the connection is lost.
Solutions 1+3 combined: make long-waiting connections, with a periodic sleep and reconnect. I mean: connect to the server and wait 30 seconds for data; if no data is received, sleep for 10 seconds, then loop.
Opening sockets is somewhat expensive, but nowhere near as expensive as your data processing.
I see an immediate problem with option 2. If the client is behind a firewall, he might very well be allowed to connect and make the request, but the server might be prevented from connecting back to the client.
As you say, option 1 looks a bit nasty (not too nasty though; it could work well), so among the options listed I would go for option 3. Perhaps the server could estimate the time left in the processing and hint to the client, in each poll, when it's about time to check back.
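For example, a minimal sketch of option 3 on the client side, with a hypothetical status URL, a made-up PENDING marker, and a fixed 10-second interval (all placeholders of my own):

import java.io.*;
import java.net.*;

// Sketch: after submitting work, poll until the server reports completion.
public class PollingClient {
    public static void main(String[] args) throws Exception {
        URL poll = new URL("http://example.com/job/42/status"); // hypothetical endpoint
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) poll.openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String status = in.readLine(); // one-line status for illustration
                if (!"PENDING".equals(status)) {
                    System.out.println("Result: " + status);
                    break;
                }
            }
            Thread.sleep(10_000); // wait before the next poll
        }
    }
}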
I have a J2ME app running on my mobile phone (the client).
I would like to open an HTTP connection with the server and keep polling for updated information on the server.
Every poll performed will use up GPRS bytes and would turn out expensive in the long run, as GPRS billing is based on packets sent and received.
Is there a byte-efficient way of polling using the HTTP protocol?
I have also heard of long polling, but I am not sure how it works or how efficient it would be.
Actually, the preferred way would be for the server to tell the phone app that new data is ready to be used, so that polling wouldn't be needed at all, but I don't know of such techniques, especially in J2ME.
If you want to solve this problem using HTTP only, long polling would be the best way. It's fairly easy. First you need to set up a URL on the server side for notification (e.g. http://example.com/notify) and define a notification protocol. The protocol can be as simple as some lines of text, with each line being an event. For example,
MSG user1
PHOTO user2 album1
EMAIL user1
HEARTBEAT 300
The polling thread on the phone works like this (a code sketch follows the steps):
1. Make an HTTP connection to the notification URL. In J2ME, you can use the GCF HttpConnection.
2. The server blocks if there are no events to push.
3. If the server responds, read each line, spawn a new thread to notify the application, and loop back to step 1.
4. If the connection closes for any reason, sleep for a while and go back to step 1.
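A rough J2ME sketch of that loop using the GCF; the URL is a placeholder and the 30-second back-off is arbitrary:

import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// Sketch: the long-poll loop from the steps above. Each request blocks
// on the server until an event (or HEARTBEAT) line arrives, then we
// reconnect immediately; on failure we back off before retrying.
public class NotificationPoller implements Runnable {
    private volatile boolean running = true;

    public void run() {
        while (running) {
            HttpConnection conn = null;
            InputStream in = null;
            try {
                conn = (HttpConnection) Connector.open("http://example.com/notify");
                in = conn.openInputStream();
                int b;
                StringBuffer line = new StringBuffer();
                while ((b = in.read()) != -1) { // CLDC has no BufferedReader
                    if (b == '\n') {
                        dispatch(line.toString()); // hand one event to the app (step 3)
                        line.setLength(0);
                    } else {
                        line.append((char) b);
                    }
                }
            } catch (Exception e) {
                try { Thread.sleep(30000); } catch (InterruptedException ignored) {} // step 4: back off
            } finally {
                try { if (in != null) in.close(); } catch (Exception ignored) {}
                try { if (conn != null) conn.close(); } catch (Exception ignored) {}
            }
        }
    }

    private void dispatch(String event) {
        // notify the application, ideally on another thread
    }
}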
You have to pay attention to the following implementation details:
Tune HTTP timeouts on both client and server. The longer the timeout, the more efficient, since a timed-out connection causes a reconnect.
Enable HTTP keep-alive on both the phone and the server. TCP's 3-way handshake is expensive in GPRS terms, so try to avoid it.
Detect stale connections. In mobile environments, it's very easy to end up with stale HTTP connections (the connection is gone but the polling thread is still waiting). You can use heartbeats to recover. Say the heartbeat rate is 5 minutes: the server should send a notification every 5 minutes, and if there is no data to push, it just sends HEARTBEAT. On the phone, the polling thread should close and reopen the polling connection if nothing is received for 5 minutes.
Handle connectivity errors carefully. Long polling doesn't work well when there are connectivity issues, and if they are not handled properly they can be the deal-breaker. For example, you can waste lots of packets in step 4 if the sleep is not long enough. If possible, check GPRS availability on the phone and put the polling thread on hold when GPRS is not available, to save battery.
Server cost can be very high if this is not implemented properly. For example, if you use Java servlets, every running application will have at least one corresponding polling connection and its thread. Depending on the number of users, this can kill a Tomcat quickly :) You need to use resource-efficient technologies, like Apache MINA.
I was told there are other, more efficient ways to push notifications to the phone, like using SMS and some IP-level tricks, but you either have to do some low-level, non-portable programming or run the risk of patent violations. Long polling is probably the best you can get with an HTTP-only solution.
I don't know exactly what you mean by "polling"; do you mean something like IMAP IDLE?
A connection stays open and there is no overhead from building up the connection again and again. As stated, another possible solution is the HTTP HEAD method (I forgot about it, thanks!).
Look into this tutorial for the basics of HTTP connections in J2ME.
Pushing data to an application/device without push support (such as a BlackBerry has) is not possible.
The HEAD request is the method HTTP provides for checking whether a page has changed; it is used by browsers and proxy servers to check whether a page has been updated without consuming much bandwidth.
In HTTP terms, a HEAD request is the same as a GET without the response body. The response would be only a couple of hundred bytes at most, which looks acceptable if your polls are not very frequent.
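For instance, a sketch of such a change check with the Java SE HTTP client classes (the URL is a placeholder; on J2ME you would do the same with the GCF HttpConnection):

import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: HEAD request to check for changes without fetching the body.
public class HeadCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/data"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD"); // headers only, no response body
        long lastModified = conn.getLastModified(); // 0 if the header is absent
        System.out.println("HTTP " + conn.getResponseCode()
                + ", Last-Modified: " + lastModified);
        conn.disconnect();
        // Compare lastModified against the previous poll; issue a GET
        // only when it has changed.
    }
}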
The best way to do this is to use a socket connection. Many applications, such as Gmail, use them.