First off, feel free to tell me whether long-lived persistent TCP connections are the way to go, or whether persistent HTTP connections are better.
I've also read that, instead of keeping a persistent connection, I could use a polling mechanism.
I'm mainly asking out of curiosity: how can I create a persistent connection from Android to a server?
Thanks!
This really depends on what your requirements are and whether you actually need a persistent connection at all.
If you have time-sensitive data that you need to push from the server to the device as soon as it becomes available, then a persistent TCP connection is your best bet.
If it is acceptable for your server and device to exchange information only periodically, then polling or on-demand HTTP requests may be a better choice.
Personally, I think a well-implemented long-lived TCP connection with a binary protocol is the superior choice when dealing with persistent connections where information must always be current.
HTTP connections generally carry a lot of per-request overhead, especially if you are using an XML-based protocol such as SOAP. Repeatedly setting up and tearing down sockets is also quite expensive.
On the other hand, persistent TCP connections can be tricky to implement on both the client and the server side. Battery life is a huge factor on the device side, and if you expect anything more than a handful of users connected at once, you may have to implement an asynchronous communication model on the server side, which brings its own set of challenges.
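If you do go the long-lived TCP route, a minimal sketch of the client side might look like the following. This is plain Java, so it applies to Android as well; the host name, port and line-based protocol are placeholders (a binary protocol as suggested above would be better), and a real app would run this off the UI thread and use a smarter back-off to save battery.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal sketch of a long-lived TCP client with a reconnect loop.
// "example.com" and port 9000 are placeholders.
public class PersistentClient {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("example.com", 9000), 10_000);
                socket.setKeepAlive(true);    // detect dead peers eventually
                socket.setSoTimeout(60_000);  // don't block on read forever
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("Push from server: " + line);
                }
            } catch (Exception e) {
                System.err.println("Connection lost: " + e);
            }
            Thread.sleep(5_000); // simple fixed back-off before reconnecting
        }
    }
}
```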
Good luck!
Long-lived TCP connections are a bad idea for anything mobile because the network is so spotty. Personally, I use UDP or transient HTTP connections with an HTTP session concept.
I've been looking into making a simple Sockets-based game in Java, and read in multiple places that client sockets are destroyed after a single exchange. Is this good practice for continued connections? The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response. I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client? I've seen so many conflicting websites about sockets in Java and how they should be implemented.
There are no hard and fast rules, but it does depend somewhat on what data rates you want to achieve.
For example, YouTube is a streaming video service, but the video data is delivered by the client using HTTPS to fetch batches of video data. Inefficient, yes, but very easy to program for. There are lots of reasons to use HTTPS for an application like YouTube (firewalls, etc.), but ultimate power saving and network performance were not among them. The "proper" way would be to use a protocol like RTP, which uses UDP to deliver small packets of data that can then be reassembled into order; you also have to deal with missing frames at the codec level, etc. Much less network traffic and friendly to bandwidth-constrained links, but significantly more difficult when it comes to traversing firewalls, writing the client software, and so on.
So if your game is sending modest amounts of data, the only thing wrong with setting up and tearing down a whole socket connection for every message is the nagging feeling you yourself will have that it is somehow not the most efficient solution.
It sounds, though, like you have a conflict between the need to communicate between client and server and the need to process something else while waiting for the communication to complete. Here you're getting into asynchronous I/O territory. To make that easy, I strongly suggest you take a look at ZeroMQ - that will make everything a whole lot simpler.
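For illustration, here is a minimal request/reply sketch assuming the JeroMQ binding (org.zeromq:jeromq 0.5+); the endpoint and the message contents are made up, and for fully asynchronous messaging you would more likely use DEALER/ROUTER sockets instead of REQ/REP.

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Rough sketch of a ZeroMQ request/reply client. ZeroMQ handles connecting,
// reconnecting and message framing behind the scenes.
public class ZmqClientSketch {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket socket = context.createSocket(SocketType.REQ);
            socket.connect("tcp://localhost:5555"); // placeholder endpoint
            socket.send("MOVE player1 10 20");      // fire off a game message
            String reply = socket.recvStr();        // blocks until the reply arrives
            System.out.println("Server replied: " + reply);
        }
    }
}
```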
and read in multiple places that client sockets are destroyed after a single exchange.
Only in the places where that actually happens. There are numerous contexts where it doesn't, the outstanding example being HTTP, where every effort is made to reuse connections.
Is this good practice for continued connections?
The question is a contradiction in terms. A continued connection is a connection that isn't closed. A closed connection can't be continued.
The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response.
The word you are groping for here is 'session'.
I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client?
Yes.
I've seen so many conflicting websites about sockets in Java and how they should be implemented.
You should use a connection pool at the client; a request loop at the server that looks for multiple requests per connection; a client-side facility that closes idle connections after some idle timeout; and a read timeout at the server that closes connections on which no request has been read within the timeout.
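A rough sketch of that server-side request loop, assuming a simple line-based protocol and an arbitrary port (the client-side connection pool is omitted), might look like this:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// One thread per client, multiple requests per connection, and a read timeout
// that closes idle connections. Port 5000 and the "ACK" handling are placeholders.
public class RequestLoopServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket socket = client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            socket.setSoTimeout(30_000);                // read timeout: drop idle clients
            String request;
            while ((request = in.readLine()) != null) { // many requests on one socket
                out.println("ACK " + request);
            }
        } catch (SocketTimeoutException e) {
            // client was idle too long; closing the connection is the intended behaviour
        } catch (IOException e) {
            // connection dropped; try-with-resources closes the socket
        }
    }
}
```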
I am looking for the best way to connect a desktop-based trading client to a trading server. Latency is the most important factor to consider.
We have two options:
1. REST calls: I can call a REST service from the trading client, but I don't think this is a good approach because each call would establish a new TCP session.
2. AMQP (e.g. RabbitMQ): We can publish messages to a RabbitMQ server, and the server can consume the messages from there.
Please suggest which approach is best, or whether there is another possible approach as well.
The client is in .NET and the server is a Java service.
A REST call is probably faster than a message queue call in most cases, since the message queue will likely involve disk access.
For minimum latency, establish a direct TCP connection and implement your own protocol.
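As an illustrative sketch only (the host, port and message layout are invented), a latency-oriented connection would typically disable Nagle's algorithm and write a compact binary message:

```java
import java.io.DataOutputStream;
import java.net.Socket;

// Sketch of a direct TCP connection with a hand-rolled binary order message.
public class DirectTcpOrderSender {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("trading-server.example", 7001)) {
            socket.setTcpNoDelay(true);   // disable Nagle so small messages go out immediately
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            out.writeByte(0x01);          // message type: new order (made-up value)
            out.writeInt(12345);          // instrument id
            out.writeInt(100);            // quantity
            out.writeLong(105_250_000L);  // price in fixed-point (e.g. 105.25 * 10^6)
            out.flush();
        }
    }
}
```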
I'd like to implement a real-time messaging feature, such as chat in Facebook, but several questions confuse me:
1. To reduce server overhead and make it truly 'real-time', I should use a full-duplex form of communication, like a socket, instead of Ajax. Is that right?
2. If I use a socket, which protocol should I choose, TCP or UDP?
3. Assuming that I am using TCP, will the server keep trying to resend lost packets, and will that add much overhead?
4. What if the network fails during communication between the server and a client? Will the socket close itself, or do I have to handle the various kinds of network failure myself?
Can anyone help?
You can use WebSockets. XMLHttpRequest is probably obsolete now for anything real-time (because it's not real-time), though you could fall back to it for people whose browser doesn't support WebSockets; a rough server-side sketch follows this answer.
Use UDP if the information you are sending is only valid at the moment it is sent; in games, for example, that would be the positions of the players (you don't care to receive the position they were in 5 seconds ago). Besides, you can't use UDP with WebSockets.
For anything other than that, use TCP (unless you do hole punching to achieve P2P), because loss of data is probably bad for you, and TCP handles that.
You would have to check for and resend lost data manually with UDP anyway, unless a failure in communication is acceptable to you.
You will get an IOException. If the connection was closed improperly, the exception will be thrown after a timeout of unresponsiveness that you can adjust to your needs. This assumes you use TCP; otherwise you have to decide for yourself when to consider clients connected or disconnected, based on the responses/data you receive (or don't receive).
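For reference, a minimal server-side chat endpoint using the standard Java WebSocket API (JSR 356) might look roughly like this. It needs a container that supports the API (e.g. Tomcat or Jetty); the /chat path and the broadcast-to-everyone behaviour are illustrative assumptions, not requirements.

```java
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

// Sketch of a chat endpoint: every message received is relayed to all
// connected clients over their full-duplex WebSocket connections.
@ServerEndpoint("/chat")
public class ChatEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Client connected: " + session.getId());
    }

    @OnMessage
    public void onMessage(String message, Session session) {
        for (Session peer : session.getOpenSessions()) {
            try {
                if (peer.isOpen()) {
                    peer.getBasicRemote().sendText(message);
                }
            } catch (IOException e) {
                // peer dropped mid-send; ignore and keep broadcasting to the rest
            }
        }
    }
}
```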
I have a Scala application which maintains (or tries to) TCP connections to various servers for hours (possibly > 24) at a time. Each server sends a short, ~30 character message about twice a second. These messages are fed into an iteratee where they are parsed and eventually end up making state changes to a database.
If any of these connections fail for any reason, my app needs to continually try to reconnect until I specify otherwise. Any messages getting lost is Bad. I have no control over the servers I connect to, or the protocols used.
It is conceivable there would be as many as 300 of these connections at once. Not exactly a high-load scenario, so I don't think NIO is needed, though it might be nice to have? Other parts of the app are high-load.
I'm looking for some sort of socket controller/manager which can keep these connections up as reliably as possible. I am running my own blocking controller now, but as I'm inexperienced with socket coding (and all the various settings, options, timeouts, etc.) I doubt it will achieve the best possible uptime. Plus I may need SSL support at some point down the line.
Would NIO offer any real advantages?
Would Netty be the best choice here? I've seen the Uptime example here, and was thinking of simply duplicating it, but being new to lower-level networking I wasn't sure if there were better options.
However I'm uncertain of the best strategies for ensuring as few packets are lost as possible, and assumed this would be a "solved" problem in one library or another.
Yup. JMS is an example.
I suppose a lot of it would come down to a timeout-guessing strategy? Close and re-open a socket too early and you've lost whatever packets were en route.
That is correct. That approach is not going to be reliable, especially if connections go up and down regularly.
A real solution involves having the other end keep track of what it has received, and letting the sender know when the connection is re-established. If that can't be done, you have no real way of controlling how much gets lost. (This is what the reliable messaging services do ...)
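Purely as an illustration of that idea (not something you can use when you don't control the other end), a cooperating receiver might track sequence numbers like this; the "RESUME-FROM" wire format is made up:

```java
// Toy illustration of the cooperative approach: the receiver remembers the
// highest sequence number it has processed and tells the sender on reconnect,
// so the sender can replay anything newer. Both sides must speak this protocol.
public class SequenceTracker {
    private long lastProcessedSeq = -1;

    // Called for every message; each message carries a monotonically
    // increasing sequence number assigned by the sender.
    public synchronized boolean accept(long seq, String payload) {
        if (seq <= lastProcessedSeq) {
            return false;              // duplicate seen after a replay; ignore it
        }
        lastProcessedSeq = seq;
        process(payload);
        return true;
    }

    // Sent to the other end right after reconnecting, e.g. "RESUME-FROM 41233".
    public synchronized String resumeRequest() {
        return "RESUME-FROM " + (lastProcessedSeq + 1);
    }

    private void process(String payload) {
        // parse the message and apply the state change (database update etc.)
    }
}
```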
I have no control over the servers I connect to. So unless there's another way to adapt JMS to a generic TCP stream I don't think it will work.
Yup. And the same applies if you try to implement this by hand. The other end has to cooperate.
I guess you could construct something where you run (say) a JMS endpoint on each of the remote servers, and have the endpoint use UNIX domain sockets or loopback (i.e. 127.0.0.1) to talk to the server. But you still have the potential for message loss.
The point of my question is to ask whether it is accepted practice to use both TCP and UDP to communicate between a client and a server.
I am making a real-time client-server game in which some parts of the communication need to be guaranteed (logging in, etc.), while other parts can tolerate lost packets (state updates, etc.). So, I would like to use UDP for most of the data communication, but I do not want to have to implement my own framework to ensure that my control communication (logging in) is guaranteed.
So, would it be reasonable to initially use TCP to manage a connection, and then send the data communication back and forth on a separate port?
You should absolutely do it that way (use TCP and UDP to accomplish different communication tasks). And you don't even have to use two different ports; one will suffice, since you can listen for the two different protocols on the same port number.
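A quick sketch showing both protocols bound to the same port number in Java (the port itself is arbitrary):

```java
import java.net.DatagramSocket;
import java.net.ServerSocket;

// The TCP and UDP port spaces are independent, so both sockets below can
// bind to port 7777 without conflict.
public class SamePortDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket tcp = new ServerSocket(7777);       // reliable control channel (login, etc.)
             DatagramSocket udp = new DatagramSocket(7777)) { // lossy state updates
            System.out.println("TCP listening on " + tcp.getLocalPort()
                    + ", UDP bound to " + udp.getLocalPort());
            // a real game server would accept() on tcp and receive() on udp here
        }
    }
}
```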
It is quite reasonable and is already used in the mainstream. Even when browsing the Web, DNS lookups are UDP-based while HTTP connections are TCP-based.
Keep in mind that you should either consider the two connection types to be completely independent or employ additional measures to properly handle any inter-dependencies. TCP connections can have timing issues at the OS and network levels and UDP connections have packet loss issues. You should take specific measures to avoid deadlocks and performance problems when the TCP part of your application stalls or a UDP packet is lost.
It is not only accepted but widely used. As a good example, BATS Exchange uses this approach in its market data distribution system to implement a recovery mechanism.