The problem: I'm seeing some packet loss internally. I say internally because I captured all the traffic with Wireshark and confirmed the packets arrived at the server, but they never reached the channelRead0 method.
Scenario:
I built a SIP server using Netty. The system uses UDP to communicate with other SIP endpoints and works fine at low load.
My doubt is about design. Since SIP is a session protocol, for every packet received I need to check which session it belongs to. The heavy workload is surely in the synchronized list that holds all sessions (I know I need to optimize this in the future).
The whole system logic lives inside the channelRead0 method, and this is probably the reason I'm losing some packets. The problem starts to happen at around 500 pkt/sec.
There is no database connection (yet); the only I/O is writing the log to a file, which has almost no impact.
The question: how should I properly design this to handle 5000 pkts/sec? Maybe put all packets in a synchronized queue and handle them later?
Thanks for all help
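For reference, here is one rough sketch of that hand-off idea (not necessarily the right design; SipHandler and the port are placeholders): instead of a hand-rolled synchronized queue, Netty 4.x can run channelRead0 on a separate EventExecutorGroup, so the UDP I/O thread only reads datagrams and never blocks on session lookup or logging.

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.ChannelOption;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioDatagramChannel;
    import io.netty.util.concurrent.DefaultEventExecutorGroup;

    public class SipServer {
        public static void main(String[] args) throws Exception {
            NioEventLoopGroup ioGroup = new NioEventLoopGroup(1);                  // reads datagrams
            DefaultEventExecutorGroup workers = new DefaultEventExecutorGroup(16); // runs channelRead0

            Bootstrap b = new Bootstrap()
                    .group(ioGroup)
                    .channel(NioDatagramChannel.class)
                    .option(ChannelOption.SO_RCVBUF, 1 << 20)   // bigger socket buffer absorbs bursts
                    .handler(new ChannelInitializer<NioDatagramChannel>() {
                        @Override
                        protected void initChannel(NioDatagramChannel ch) {
                            // channelRead0 of SipHandler now executes on "workers", so the
                            // session-list lookup and file logging cannot stall the reader.
                            ch.pipeline().addLast(workers, new SipHandler());
                        }
                    });
            b.bind(5060).sync().channel().closeFuture().sync();
        }
    }

Kernel-level drops can still happen under bursts if the datagram socket's receive buffer is too small, which is why the sketch also raises SO_RCVBUF; the 1 MB value is only an illustration.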
Related
Currently I have a situation where the client disconnects completely without sending an EOF (for example, the client is a phone that suddenly switches network from Wi-Fi to 4G), but my server will keep sending messages to this client. It takes at least 10 minutes until the server finds out that the peer is unreachable.
So is there an option in Java to reduce the send timeout, just like SO_SNDTIMEO in C?
The Android docs are pretty straightforward about what they offer: https://developer.android.com/reference/java/net/SocketOptions.html
SO_TIMEOUT is on the list, but it applies to read operations only. Completion of a send operation usually doesn't indicate that a packet has been received by the remote host; it only indicates that the packet has been accepted into the kernel's network queue and will be sent "soon".
I won't blame the Android team for not having (or at least not advertising) a socket option for a send timeout, because you don't get much information from the completion of a send. It's really up to the application level to detect disconnects. Enhance your protocol, introduce application-level keepalives, try non-blocking socket mode to avoid long operations, and keep track of what was actually received by the remote host - a send alone is not enough. This will result in a much more robust application.
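For illustration only, here is a minimal sketch of such an application-level keepalive over a plain blocking java.net.Socket; the PING/PONG message names, host and port are assumptions, not part of any real API. The read timeout bounds how long the client waits for any traffic, so a dead peer is noticed within seconds rather than minutes.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.net.SocketTimeoutException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class KeepAliveClient {
        private static final int PING_INTERVAL_SEC = 10;
        private static final int READ_TIMEOUT_MS = 30_000;   // give up after 30 s of silence

        public static void main(String[] args) throws IOException {
            Socket socket = new Socket("example.com", 5222);  // hypothetical host and port
            socket.setSoTimeout(READ_TIMEOUT_MS);             // bounds reads, not sends

            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

            ScheduledExecutorService pinger = Executors.newSingleThreadScheduledExecutor();
            pinger.scheduleAtFixedRate(() -> out.println("PING"),
                    PING_INTERVAL_SEC, PING_INTERVAL_SEC, TimeUnit.SECONDS);

            try {
                String line;
                while ((line = in.readLine()) != null) {
                    if ("PONG".equals(line)) {
                        continue;                             // peer answered: still alive
                    }
                    // ... handle ordinary protocol messages here ...
                }
            } catch (SocketTimeoutException e) {
                // Nothing (not even a PONG) arrived within READ_TIMEOUT_MS:
                // treat the peer as gone instead of waiting ~10 minutes for TCP to notice.
            } finally {
                pinger.shutdownNow();
                socket.close();
            }
        }
    }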
I've been looking into making a simple Sockets-based game in Java, and read in multiple places that client sockets are destroyed after a single exchange. Is this good practice for continued connections? The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response. I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client? I've seen so many conflicting websites about sockets in Java and how they should be implemented.
There are no hard and fast rules, but it does depend slightly on what data rates you want to achieve.
For example, YouTube is a streaming video service, but the video data is delivered by the client using HTTPS to fetch batches of video data. Inefficient, yes, but very easy to program for. There are lots of reasons to use HTTPS for an application like YouTube (firewalls, etc.), but ultimate power saving and network performance are not among them. The "proper" way would be to use a protocol like RTP, which uses UDP to deliver small packets of data that can then be rearranged into order; you also have to deal with missing frames at the codec level, etc. Much less network traffic, and friendly to bandwidth-constrained network links, but significantly more difficult to deal with when traversing firewalls, in client software, and so on.
So if your game is sending modest amounts of data, the only thing wrong with setting up and tearing down a whole socket connection for every message is the nagging feeling you yourself will have that it is somehow not the most efficient solution.
It sounds, though, like you have a conflict between the need to communicate between client and server and the need to process something else whilst waiting for the communication to complete. Here you're getting into asynchronous I/O territory. To make that easy I strongly suggest you take a look at ZeroMQ - that will make everything a whole lot simpler.
and read in multiple places that client sockets are destroyed after a single exchange.
Only in the places where that actually happens. There are numerous contexts where it doesn't, the outstanding example being HTTP, where every effort is made to reuse connections.
Is this good practice for continued connections?
The question is a contradiction in terms. A continued connection is a connection that isn't closed. A closed connection can't be continued.
The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response.
The word you are groping for here is 'session'.
I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client?
Yes.
I've seen so many conflicting websites about sockets in Java and how they should be implemented.
You should use a connection pool at the client; a request loop at the server that looks for multiple requests per connection; a client-side facility that closes idle connections after some idle timeout; and a read timeout at the server that closes connections on which no request has been read within the timeout.
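As an illustration of the server side of that advice (a sketch only, assuming a simple line-based protocol; the port and the echo-style handling are placeholders), the request loop keeps reading requests on one connection and the read timeout closes it once the client goes idle:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class RequestLoopServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000)) {          // hypothetical port
                while (true) {
                    Socket client = server.accept();
                    new Thread(() -> serve(client)).start();              // one session per thread
                }
            }
        }

        private static void serve(Socket client) {
            try (Socket c = client;
                 BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                c.setSoTimeout(30_000);                                   // server-side read timeout
                String request;
                while ((request = in.readLine()) != null) {               // many requests per connection
                    out.println("ACK " + request);                        // placeholder for real handling
                }
            } catch (SocketTimeoutException idle) {
                // no request arrived within the timeout: drop the idle connection
            } catch (IOException ignored) {
                // client went away; nothing else to do
            }
        }
    }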
I'm currently developing a Java WebSocket Client Application and I have to make sure that every message from the server is received by the client. Is it possible that I lose some messages (once they are sent from the server) due to a connection interruption? WebSocket is based on TCP so this shouldn't happen right?
It can happen. TCP guarantees the order of packets, but it does not guarantee that all packets sent from a server reach a client when unrecoverable trouble occurs in the underlying network. Imagine someone pulls out your LAN cable or switches off your Wi-Fi access point at the worst possible moment while your application is communicating with your server. TCP does not overcome that kind of trouble.
To ensure that every WebSocket message sent from your server reaches your client, you have to implement some kind of SYN/ACK in the application layer.
TCP is a guaranteed protocol - packets will be received in the correct order by the higher application levels at the far end (as opposed to UDP, which is a send-and-hope protocol).
Generally speaking, TCP should be used for connections where all the data must arrive correctly at the far end. UDP is used where a missing packet can be dropped without significant issue (e.g. streaming services, NTP updates).
In my game, to counter missed WebSocket messages, I added an int/long ID to each message. When the client detects that something is wrong in the sequence of IDs it receives, it requests fresh data from the server so it can recover properly.
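A minimal sketch of that sequence check (the message shape and the requestResend call are illustrative assumptions, not part of the WebSocket API):

    public class SequenceTracker {
        private long expectedId = 1;   // the next ID we expect from the server

        /** Call this for every message received over the WebSocket. */
        public synchronized void onMessage(long id, String payload) {
            if (id > expectedId) {
                // A gap in the IDs means messages were lost: ask the server to resend them.
                requestResend(expectedId, id - 1);
            }
            expectedId = id + 1;
            // ... hand the payload to the game logic ...
        }

        private void requestResend(long fromId, long toId) {
            // hypothetical: send something like "RESEND <fromId> <toId>" back to the server
        }
    }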
TCP provides reliable, ordered, and error-checked delivery (it also has flow control, which keeps a fast sender from overwhelming a slow receiver).
In other words, TCP constantly checks whether the data arrived.
The protocol has several mechanisms (acknowledgements, retransmissions, checksums) to ensure that.
You can see the difference between TCP and UDP (which offers none of these guarantees) in the link below.
Difference between tcp and udp
I need to implement a client/server instant messenger using pure sockets in Java.
The server should serve a large number of clients, and I need to decide which sockets to use - TCP or UDP.
Thanks, Costa.
TCP
Reason:
TCP: "There is absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent."
UDP: "There is no guarantee that the messages or packets sent would reach at all."
Learn more at: http://www.diffen.com/difference/TCP_vs_UDP
Would you want your chat message possibly lost?
Edit: I missed the part about the "large chat program". I still think that, because of the nature of a chat program, it needs to be a TCP server; I cannot imagine sending the actual text content typed by users over UDP.
The oft-quoted limit of 65,536 simultaneous TCP connections comes from the 16-bit port range and really only caps connections from a single client address to one server port; a server itself can hold far more concurrent connections, limited mainly by memory and file descriptors. If you do need to scale beyond one machine, you could create a dispatcher server that sends incoming connections to the appropriate server depending on current server load.
You could use both. Use TCP for exchanging the actual messages, so no data is lost and streaming large messages (e.g. containing JPEGs) is possible. Use UDP only for sending short 'connectNow' messages to clients that have messages queued. The clients could have states like NotLoggedIn, TCPconnected, TCPdisconnected and LoggedOut, with various timeouts to control the state transitions as well as the normal message-exchange events. The UDP 'connectNow' message would instruct clients in 'TCPdisconnected' to connect and so move to 'TCPconnected', where they would stay, exchanging messages, until some inactivity timer tells the client to disconnect for now. This would, of course, be unreliable, so you may wish to repeat the 'connectNow' message every X seconds, up to N times, until the client connects (a sketch of that retry loop follows). The client should, in any case, attempt a poll every X minutes, just in case...
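For illustration, a rough sketch of the repeated UDP 'connectNow' nudge (the payload, retry count, interval and the clientConnected check are assumptions about your own protocol):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.function.BooleanSupplier;

    public class ConnectNowNudger {
        private static final int REPEATS = 5;          // N: how many times to repeat the nudge
        private static final int INTERVAL_MS = 2_000;  // X: pause between nudges

        /** Sends "connectNow" until the client opens its TCP connection or we give up. */
        public static void nudge(InetAddress clientAddr, int clientPort,
                                 BooleanSupplier clientConnected) throws Exception {
            byte[] payload = "connectNow".getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket udp = new DatagramSocket()) {
                for (int i = 0; i < REPEATS && !clientConnected.getAsBoolean(); i++) {
                    udp.send(new DatagramPacket(payload, payload.length, clientAddr, clientPort));
                    Thread.sleep(INTERVAL_MS);
                }
            }
        }
    }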
It depends on whether the user needs to know that messages have been delivered to the server. UDP packets have no inherent acknowledgement: if the client sends an IM message to the server and it gets lost in transit, neither the client nor the server will know about it.
(The short answer is "use TCP" ... but it is worth thinking through the design implications for yourself.)
TCP would give you reliability, which is certainly desirable for instant messaging -- you would not want messages to be dropped mid-conversation.
However, if you intend to support group messaging, you might end up using multicast. For such cases, UDP would be the right choice, since UDP can handle point-to-multipoint delivery. Using TCP for multicast applications would be hard, since the sender would have to keep track of retransmissions and the sending rate for multiple receivers. One alternative could be to use TCP for point-to-point chat and UDP for group messaging.
I have a J2ME app running on my mobile phone (the client).
I would like to open an HTTP connection with the server and keep polling for updated information on the server.
Every poll performed will use up GPRS bytes and will turn out expensive in the long run, as GPRS billing is based on packets sent and received.
Is there a byte-efficient way of polling using the HTTP protocol?
I have also heard of long polling, but I am not sure how it works or how efficient it would be.
Actually, the preferred way would be for the server to tell the phone app that new data is ready, so no polling would be needed at all; however, I don't know of such techniques, especially in J2ME.
If you want to solve this problem using HTTP only, long polling would be the best way. It's fairly easy. First you need to set up a URL on the server side for notification (e.g. http://example.com/notify) and define a notification protocol. The protocol can be as simple as a few text lines, with each line being an event. For example,
MSG user1
PHOTO user2 album1
EMAIL user1
HEARTBEAT 300
The polling thread on the phone works like this (a rough J2ME sketch follows the steps):
1. Make an HTTP connection to the notification URL. In J2ME, you can use the GCF HttpConnection.
2. The server blocks if there are no events to push.
3. When the server responds, read each line, spawn a new thread to notify the application, and loop back to step 1.
4. If the connection closes for any reason, sleep for a while and go back to step 1.
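Here is a rough sketch of that loop using the MIDP Generic Connection Framework (the notify URL and the dispatch method are assumptions; error handling is reduced to a fixed back-off):

    import java.io.InputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;

    public class NotificationPoller implements Runnable {
        private static final String NOTIFY_URL = "http://example.com/notify";  // hypothetical
        private volatile boolean running = true;

        public void run() {
            while (running) {
                HttpConnection conn = null;
                InputStream in = null;
                try {
                    conn = (HttpConnection) Connector.open(NOTIFY_URL);   // step 1
                    conn.setRequestMethod(HttpConnection.GET);
                    in = conn.openInputStream();                          // step 2: blocks until the server pushes
                    StringBuffer line = new StringBuffer();
                    int c;
                    while ((c = in.read()) != -1) {                       // step 3: read event lines
                        if (c == '\n') {
                            dispatch(line.toString());                    // notify the application
                            line.setLength(0);
                        } else {
                            line.append((char) c);
                        }
                    }
                } catch (Exception e) {
                    try { Thread.sleep(30000); } catch (InterruptedException ie) { }  // step 4: back off
                } finally {
                    try { if (in != null) in.close(); } catch (Exception e) { }
                    try { if (conn != null) conn.close(); } catch (Exception e) { }
                }
            }
        }

        private void dispatch(String event) {
            // hypothetical: parse lines like "MSG user1" or "HEARTBEAT 300" and hand them to the UI
        }
    }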
You have to pay attention to the following implementation details:
Tune HTTP timeouts on both client and server. The longer the timeout, the more efficient; a timed-out connection causes a reconnect.
Enable HTTP keep-alive on both the phone and the server. TCP's 3-way handshake is expensive in GPRS terms, so try to avoid it.
Detect stale connections. In mobile environments it's very easy to end up with stale HTTP connections (the connection is gone but the polling thread is still waiting). You can use heartbeats to recover. Say the heartbeat rate is 5 minutes: the server should send a notification every 5 minutes, and if there is no data to push, it just sends HEARTBEAT. On the phone, the polling thread should close and reopen the polling connection if nothing has been received for 5 minutes (a small watchdog sketch follows this list).
Handle connectivity errors carefully. Long polling doesn't work well when there are connectivity issues, and if not handled properly it can be a deal-breaker. For example, you can waste a lot of packets on step 4 if the sleep is not long enough. If possible, check GPRS availability on the phone and put the polling thread on hold when GPRS is unavailable, to save battery.
Server cost can be very high if this isn't implemented properly. For example, with a Java servlet, every running application will have at least one corresponding polling connection and its thread; depending on the number of users, this can kill a Tomcat quickly :) You need to use resource-efficient technologies, like Apache MINA.
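To make the stale-connection point concrete, a rough sketch of the phone-side watchdog (the NotificationPoller and its reconnect method are the hypothetical pieces from the earlier sketch):

    public class StaleConnectionWatchdog extends Thread {
        private static final long HEARTBEAT_INTERVAL_MS = 5L * 60 * 1000;  // matches the server's rate
        private static final long CHECK_EVERY_MS = 30000;

        private final NotificationPoller poller;          // the poller from the earlier sketch
        private volatile long lastReceived = System.currentTimeMillis();

        public StaleConnectionWatchdog(NotificationPoller poller) {
            this.poller = poller;
        }

        /** Call this from the polling thread whenever any line (including HEARTBEAT) arrives. */
        public void touch() {
            lastReceived = System.currentTimeMillis();
        }

        public void run() {
            while (true) {
                try {
                    Thread.sleep(CHECK_EVERY_MS);
                } catch (InterruptedException e) {
                    return;
                }
                if (System.currentTimeMillis() - lastReceived > HEARTBEAT_INTERVAL_MS) {
                    poller.reconnect();                    // hypothetical: close and reopen the connection
                    lastReceived = System.currentTimeMillis();
                }
            }
        }
    }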
I was told there are other, more efficient ways to push notifications to the phone, like using SMS and some IP-level tricks. But you either have to do some low-level, non-portable programming or run the risk of patent violations. Long polling is probably the best you can get with an HTTP-only solution.
I don't know exactly what you mean by "polling" - do you mean something like IMAP IDLE?
A connection stays open and there is no overhead for building up the connection itself again and again. As stated, another possible solution is the HEAD header of an HTTP request (forgot about it, thanks!).
Look into this tutorial for the basics of HTTP connections in J2ME.
Pushing data to an application/device without push support (such as a BlackBerry has) is not possible.
The HEAD HTTP request is the method HTTP provides for checking whether a page has changed; it is used by browsers and proxy servers to check whether a page has been updated without consuming much bandwidth.
In HTTP terms, a HEAD request is the same as a GET without the body. I assume the response would be only a couple of hundred bytes at most, which looks acceptable if your polls are not very frequent.
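A small sketch of such a conditional HEAD poll in J2ME (the URL is a placeholder; the Last-Modified / If-Modified-Since handshake is standard HTTP, but whether your server supports it is an assumption):

    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;

    public class HeadPoller {
        private String lastModified;   // value of the Last-Modified header from the previous poll

        /** Returns true if the page changed since the last poll. */
        public boolean pageChanged(String url) throws Exception {
            HttpConnection conn = (HttpConnection) Connector.open(url);
            try {
                conn.setRequestMethod(HttpConnection.HEAD);              // headers only, no body
                if (lastModified != null) {
                    conn.setRequestProperty("If-Modified-Since", lastModified);
                }
                int code = conn.getResponseCode();
                if (code == HttpConnection.HTTP_NOT_MODIFIED) {
                    return false;                                        // 304: nothing new, few bytes spent
                }
                lastModified = conn.getHeaderField("Last-Modified");     // remember for the next poll
                return true;
            } finally {
                conn.close();
            }
        }
    }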
The best way to do this is to use a socket connection. Many applications, like Gmail, use them.