Non-blocking UDP I/O vs blocking UDP I/O in Java

Non-blocking TCP/IP SocketChannels and Selector in NIO help me to handle many TCP/IP connections with small number of threads. But how about UDP DatagramChannels? (I must admit that I'm not very familiar with UDP.)
UDP send operations don't seem to block even when the DatagramChannel is operating in blocking mode. Is there really a case where DatagramSocket.send(DatagramPacket) blocks due to congestion or something similar? I'm really curious whether such a case exists and which cases are possible in a production environment.
If DatagramSocket.send(DatagramPacket) doesn't actually block and I am not going to use a connected DatagramSocket and bind to only one port, is there no advantage of using non-blocking mode with DatagramChannel and Selector?

It's been a while since I've used Java's DatagramSockets, Channels and the like, but I can still give you some help.
The UDP protocol does not establish a connection like TCP does. Rather, it just sends the data and forgets about it. If it is important to make sure that the data actually gets there, that is the client's responsibility. Thus, even if you are in blocking mode, your send operation will only block for as long as it takes to flush the buffer out. Since UDP does not know anything about the network, it will write it out at the earliest opportunity without checking the network speed or if it actually gets to where it is supposed to be going. Thus, to you, it appears as if the channel is actually immediately ready for more sending.
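To illustrate (this sketch is not from the original answer): a non-blocking send with NIO, where the address, port and payload are arbitrary examples. Per the DatagramChannel.send() Javadoc, in non-blocking mode the call never waits; it either hands the whole datagram to the OS or returns zero when the socket's output buffer has no room for it.

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.DatagramChannel;
    import java.nio.charset.StandardCharsets;

    public class NonBlockingSend {
        public static void main(String[] args) throws Exception {
            try (DatagramChannel channel = DatagramChannel.open()) {
                channel.configureBlocking(false);            // non-blocking mode
                ByteBuffer payload = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
                // Either the whole datagram is copied to the OS send buffer
                // (return value = number of bytes), or 0 is returned if there
                // is currently no room for it.
                int sent = channel.send(payload, new InetSocketAddress("127.0.0.1", 9876));
                if (sent == 0) {
                    // The rare case a Selector with OP_WRITE interest would let you wait for.
                    System.out.println("OS send buffer full, datagram not sent");
                }
            }
        }
    }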

UDP doesn't block (it only blocks while the data is being transferred to the OS).
This means that if at any point the next hop/switch/machine cannot buffer the UDP packet, it drops it. This can be desirable behaviour in some situations, but it is something you need to be aware of.
UDP also doesn't guarantee to:
deliver packets in the order they are sent;
keep large packets in one piece (they may be fragmented);
forward packets across switches (UDP forwarding between switches is often turned off).
However, UDP does support multicast, so the same packet can be delivered to one or more hosts; the sender still has no idea whether anyone receives the packets.
A tricky thing about UDP is it works most of the time, but fails badly sometimes in ways which are very difficult to reproduce. For this reason, you shouldn't assume reliability even if you do a few tests and it appears to work.
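As a small illustration of the multicast point above, here is a hypothetical receiver joining a multicast group with java.net.MulticastSocket; the group address and port are arbitrary examples.

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class MulticastReceiver {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.1");
            try (MulticastSocket socket = new MulticastSocket(4446)) {
                socket.joinGroup(group);                     // start receiving packets sent to the group
                byte[] buf = new byte[1500];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);                      // every member of the group gets its own copy
                socket.leaveGroup(group);
            }
        }
    }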

Non blocking UDP is mostly useful on the receiving side.
Packet sending can only be delayed by local circumstances: local traffic-shaping tools (such as "gaming" network cards that prioritize gaming traffic over other traffic) or an overloaded network interface (which is unlikely) can delay the sending of a packet. Once the packet leaves the local interface, it is no longer the application's concern.
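A minimal sketch of the receiving side this answer refers to, assuming several DatagramChannels bound to different (arbitrarily chosen) ports and serviced by a single Selector thread:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.DatagramChannel;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;

    public class UdpSelectorReceiver {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            for (int port : new int[] {9000, 9001, 9002}) {
                DatagramChannel channel = DatagramChannel.open();
                channel.bind(new InetSocketAddress(port));
                channel.configureBlocking(false);            // required before registering with a Selector
                channel.register(selector, SelectionKey.OP_READ);
            }

            ByteBuffer buffer = ByteBuffer.allocate(1500);
            while (true) {
                selector.select();                           // blocks until at least one channel is readable
                for (SelectionKey key : selector.selectedKeys()) {
                    DatagramChannel channel = (DatagramChannel) key.channel();
                    buffer.clear();
                    // receive() returns the sender's address, or null if nothing was pending
                    if (channel.receive(buffer) != null) {
                        buffer.flip();
                        // handle the datagram in 'buffer'
                    }
                }
                selector.selectedKeys().clear();
            }
        }
    }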

Related

Packet loss on Channel Handler Context on Netty

The problem: I'm having some packet loss internally. I say internally because I captured all the traffic with Wireshark and confirmed the packets arrived at the server, but they never reached the channelRead0 method.
Scenario:
I built a SIP server using Netty. The system uses UDP to communicate with other SIP endpoints and works fine at low load.
My doubt is about design. Since SIP is a session protocol, on every packet received I need to check which session it belongs to. The heaviest workload is surely the synchronized list that holds all the sessions (I know I need to optimize this in the future).
The whole system logic is inside the channelRead0 method, and this is probably the reason I'm losing some packets. The problem starts to happen at around 500 pkt/sec.
There is no database connection (yet); the only I/O is writing logs to a file, which has almost no impact.
The question: How should I properly design this to handle 5000 pkt/sec? Maybe put all packets in a synchronized queue and handle them later?
Thanks for all the help.
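One possible way to act on the queue idea above, as a hypothetical sketch only (the class name, pool size and queue capacity are illustrative, not from the original post): hand each packet off to a bounded queue drained by worker threads, so channelRead0 returns quickly and the Netty event loop keeps draining the socket instead of dropping datagrams.

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.socket.DatagramPacket;

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SipPacketHandler extends SimpleChannelInboundHandler<DatagramPacket> {

        // Bounded queue: if the workers fall behind, offer() fails and the packet
        // is dropped explicitly instead of stalling the event loop.
        private final BlockingQueue<DatagramPacket> queue = new LinkedBlockingQueue<>(10_000);
        private final ExecutorService workers = Executors.newFixedThreadPool(4);

        public SipPacketHandler() {
            for (int i = 0; i < 4; i++) {
                workers.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            DatagramPacket packet = queue.take();
                            try {
                                process(packet);             // session lookup + SIP logic goes here
                            } finally {
                                packet.release();            // balances the retain() in channelRead0
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
        }

        @Override
        protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) {
            // SimpleChannelInboundHandler releases msg after this method returns,
            // so retain it before handing it to another thread.
            if (!queue.offer(msg.retain())) {
                msg.release();                               // queue full: drop (and ideally count) it
            }
        }

        private void process(DatagramPacket packet) {
            // parse the SIP message, look up the session, etc.
        }
    }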

2 way server-client UDP in java

I want to write a program in Java that handles two-way communication between a server and a client using UDP. Most of the sources online only cover one way, that is, from client to server. I want the server to be able to send messages to the client as well.
If you cannot use TCP, you can still achieve the same behaviour with UDP.
There are three aspects to consider.
First, the one you mentioned: you want to communicate both ways. You can do that by running a sender and a listener thread on both the client and the server.
Second: UDP packets are not guaranteed to arrive. You have to implement an ACK logic in your application layer.
Third: UDP packets are not guaranteed to arrive in order. You have to implement some kind of ordering in your application layer.
UDP is a connectionless protocol on top of IP. This just means that there is no established connection on the receiving end; you just receive packets of data. To answer back, you have to send a packet "back" to the client.
For this the client needs to be reachable however. This might or might not work through firewalls. Usually firewalls get "punched through" if the client initiates the conversation, but there is no guarantee.
Note also, that UDP packets may arrive out of order, duplicated or not at all. You have to be ready for all. If you send bigger (than MTU) packets, they may have a higher chance of not arriving due to splitting.
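As a minimal sketch of the "answer back" point (the port number and reply payload are arbitrary examples), a server can simply reply to the source address of each datagram it receives:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.nio.charset.StandardCharsets;

    public class UdpEchoServer {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket(9876)) {
                byte[] buf = new byte[1500];                 // roughly one MTU-sized datagram
                while (true) {
                    DatagramPacket request = new DatagramPacket(buf, buf.length);
                    socket.receive(request);                 // blocks until a datagram arrives

                    // Reply "back" to whatever address the request came from.
                    byte[] reply = "ack".getBytes(StandardCharsets.UTF_8);
                    DatagramPacket response = new DatagramPacket(
                            reply, reply.length, request.getSocketAddress());
                    socket.send(response);
                }
            }
        }
    }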

Java: what is the difference between ServerSocket and DatagramSocket?

Basically I am new to server and client programming in Java. I googled all the necessary resources to learn about this particular topic; however, I did not understand the difference between the two classes.
What I understand so far is that both of them can handle client requests, but I need to know the benefits of each class and the particular scenarios or specific cases where each can be used efficiently.
For instance, I have a server-client program which is a subset of TeamViewer, in which the client program must send a screenshot to the server every millisecond while the server publishes it to another connected client. The code is working, but I found that ServerSocket consumes a lot of heap, although it delivers successfully to the server and the client. I also read a blog (the link is missing) related to my problem which suggested that DatagramSocket is the solution because it does not perform handshakes.
I am really concerned about the benefits and disadvantages of these classes.
A ServerSocket is for accepting incoming network connections on some stream protocol; e.g. TCP/IP.
A DatagramSocket is for sending and receiving datagrams on some connectionless datagram / message protocol; e.g. UDP/IP
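A minimal sketch contrasting the two APIs (port numbers are arbitrary): ServerSocket accepts a TCP connection and gives you a byte stream tied to one client, while DatagramSocket hands you self-contained datagrams with no connection at all.

    import java.io.InputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SocketComparison {

        static void tcpAccept() throws Exception {
            try (ServerSocket server = new ServerSocket(8080);
                 Socket connection = server.accept()) {       // waits for a TCP connection
                InputStream in = connection.getInputStream(); // byte stream from that one client
                // read from the stream until done
            }
        }

        static void udpReceive() throws Exception {
            try (DatagramSocket socket = new DatagramSocket(8081)) {
                byte[] buf = new byte[1500];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);                       // one self-contained datagram, no connection
                // use packet.getData(), packet.getLength(), packet.getSocketAddress()
            }
        }
    }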
Supplementary questions:
Basically what is a datagram
A datagram is bunch of information sent in a single logical packet. For example, a UDP packet.
and does this mean datagram = lightweight packets ?
It depends on your definition of lightweight!
UDP datagrams are sent as IP packets. If a UDP datagram is too big for an IP packet, it is broken into multiple IP packets by the sender and reassembled by the receiver.
and what does connectionless mean,
It means that no logical connection exists between the 2 parties. If a component IP packet of a UDP datagram is lost, the UDP datagram is lost. The receiver never knows (at the application level). There is no reporting of data loss and no retrying in UDP. This is typical "connectionless" behavior.
does it mean Data might get lost during transmission?
Basically, yes. If you want reliable / lossless data transmission, you should use ServerSocket and Socket; e.g. TCP/IP streams.
However, be aware that even with a (bare) TCP/IP stream, data delivery is not guaranteed:
If there is a network failure, or if either the sender or receiver has a failure, then a connection can be broken while data is in transit. That will result in data loss ... for that connection. (Sockets do not support reconnecting.) If the sender and/or receiver are still alive they will typically be informed that the connection has been broken, but they won't know why, or how much data was lost in transit.
It is possible for data to be corrupted in transit in ways that TCP/IP's error detection cannot spot. The receiver won't know this has happened.
Both of these issues can be addressed at the application protocol level; e.g. using message queues for the first and strong encryption and strong checksumming for the second.
Concerning your attempt to use ServerSocket.
The code is working, but I found that ServerSocket consumes a lot of heap, although it delivers successfully to the server and the client.
You are doing something wrong. If you use the API appropriately the memory overheads should be insignificant.
My guess is that you are doing one or more of the following:
Opening a new connection for each client / server interaction
On the server side, creating a new thread for each connection
Not closing the connections.
I also read a blog (the link is missing) related to my problem which suggested that DatagramSocket is the solution because it does not perform handshakes.
Handshakes won't cause significant memory consumption. TCP does perform a handshake when a connection is established, but the cost is a round trip of latency, not heap space.
You say you have looked on google, but there are several pages on google that address your question directly. There are several that have the same title as your question. You have even indicated that you understand some of the difference between them by using the [tcp] and [udp] tags on your question.
The difference is one uses TCP communication protocol and one uses the UDP communication protocol. Perhaps your question is not one about Java but about how the internet, computer networking, and the communication protocols work?
TCP is a connection oriented reliable delivery protocol.
UDP is a connectionless unreliable delivery protocol.
What this means is you have to decide which is important, speed or reliability.
Is the data such that it must not be corrupted in transit? If it is then you must use TCP or a serversocket.
If the data must arrive by the fastest method, even at risk of getting lost, then you must use UDP or a datagramsocket.
If you need more explanation to understand you should take a course on computer networking.

Non-blocking server and thread safety

I have a server which uses one thread to receive UDP DatagramPackets from a remote data source, and a TCP ServerSocket to listen for remote client requests and spawn a dedicated thread for each client.
I want to forward each DatagramPacket through the ServerSocket connections to the multiple clients, and now I'm encountering significant packet loss. Could anybody give some advice?
Thanks in advance.
Could it just be the wrong choice of protocols in the design?
The delivery to multiple clients is supposed to be reliable, since you use TCP. But that reliability fails because of the dependency on UDP introduced by the coupling (bridging/rebroadcasting) on the server side.
UDP is only applicable for reliable applications if you take into account that packets will be lost by design.
Solution 1: change the protocol.
Solution 2: if it is not possible to change the protocol, then change the user's expectations on the client side about the quality of service.
Solution 3: add redundancy on the UDP side: repeat requests, stock up on data ahead of time in anticipation of future drops in quality, and maintain a large accumulated cache of data so you can feed the clients no matter what.
It would be more to the point to get rid of UDP at the sender rather than try to shoehorn it somehow into your TCP design at the receiver, where it is already too late. The packets are lost between the sender and receiver, not in your receiver. Fixing the receiver code won't fix the actual problem.
Without knowing anything about your application design, I can make the following guesses:
Your UDP source is sending more packets than your receiver can handle, causing packets to be dropped, because
your receiver is being blocked when passing off the packets to the TCP clients, because
your TCP clients are not picking up the packets quickly enough causing the buffer to fill up (which forces the server to block which causes it to miss UDP packets).
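If the receive thread really is blocking as in the last two guesses, one hypothetical mitigation (names, port and sizes are illustrative) is to make sure the UDP receive loop never blocks on a slow TCP client: give each client a bounded queue drained by its own writer thread, and drop packets for that client when its queue is full.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class UdpToTcpBridge {

        // One bounded queue per connected TCP client, each drained by that
        // client's own writer thread (not shown here).
        private final List<BlockingQueue<byte[]>> clientQueues = new CopyOnWriteArrayList<>();

        public void receiveLoop() throws Exception {
            try (DatagramSocket socket = new DatagramSocket(5000)) {
                socket.setReceiveBufferSize(4 * 1024 * 1024); // ask the OS for a larger receive buffer
                byte[] buf = new byte[1500];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                while (true) {
                    socket.receive(packet);
                    byte[] copy = Arrays.copyOf(packet.getData(), packet.getLength());
                    for (BlockingQueue<byte[]> queue : clientQueues) {
                        // offer() never blocks: a slow client loses this packet
                        // instead of stalling the UDP receive loop.
                        queue.offer(copy);
                    }
                }
            }
        }
    }

As the previous answer points out, this only stops the receiver itself from dropping packets; losses on the way from the sender still have to be addressed at the sender or in the protocol.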

Why do I get UDP datagrams out of order even with processes running locally?

I'm developing a Java interface between a streaming server and a Flash client. I noticed that UDP datagrams can reach my interface out of order even though both processes are running locally.
Is that normal? I thought that as no datagram has to go through any router or any network device, then that should not be happening.
This would be operating-system dependent. While you didn't specify an operating system, it isn't important anyway: to remain portable, you should always anticipate your datagram sockets receiving out-of-order data.
Actually, there are no guarantees about the ordering or reception of UDP packets, even when they are sent from localhost to localhost, simply because the protocol specification doesn't promise anything about it.
Since you can't make any assumptions about ordering, you should either use TCP or handle reordering yourself with a sequence number managed by your programs.
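A minimal sketch of that sequence-number idea, assuming a 4-byte counter prepended to each datagram's payload (the framing is illustrative, not a standard):

    import java.nio.ByteBuffer;

    public class SequencedPayload {

        private static int nextSeq = 0;                      // a real sender would use an AtomicInteger

        // Sender side: prefix the payload with a monotonically increasing sequence number.
        static byte[] wrap(byte[] payload) {
            ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
            buf.putInt(nextSeq++).put(payload);
            return buf.array();
        }

        // Receiver side: read the sequence number back out, then reorder packets
        // (or detect gaps) before handing the payload to the application.
        static int sequenceOf(byte[] datagram) {
            return ByteBuffer.wrap(datagram).getInt();
        }
    }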
Although you are running on localhost, expect UDP datagrams to be out of sequence in an actual deployment as well.
If you need them in sequence, try TCP.
UDP isn't specified to preserve sequence, as the posters above have all said, but if there are no intermediate routers I would also suspect a bug in your code.
