Do UDP packets arrive in their entirety? - java

I know that TCP simulates a stream, so typically reads will start as soon as any data is received. That's why I use this snippet to make sure it waits for the entire packet before acting on it:
int packetSize = inputStream.readShort();
byte[] packetBuffer = new byte[packetSize];
int byteTrans = 0;
while (byteTrans < packetSize) {
    // read() may return fewer bytes than requested; -1 means the stream closed mid-packet
    int n = inputStream.read(packetBuffer, byteTrans, packetSize - byteTrans);
    if (n < 0) {
        throw new EOFException("stream closed mid-packet");
    }
    byteTrans += n;
}
For UDP, however, will I still have to work around the same problem? I don't think so, because TCP simulates a stream by breaking your data up into smaller packets and sending them, while UDP gives you control over the individual packets.
For reading UDP I use:
byte[] packetByte = new byte[packetSize];
DatagramPacket packet = new DatagramPacket(packetByte, packetByte.length);
socket.receive(packet);
Do I have to implement a similar system for UDP?

Yes: when a datagram packet is actually received, it is received in its entirety (keep reading, because it may not be received at all).
The behavior of UDP and TCP differs in much more than just that. UDP does not guarantee that packets are received in the order they are sent, that they are received at all, or that they are received exactly once. UDP is more of a "fire and forget" protocol, whereas TCP maintains connection state.
In short, if the packet is received, you will get the whole packet. But it may not be received at all.
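A minimal loopback sketch illustrates this: even with a receive buffer much larger than the payload, one `receive()` call returns exactly one whole datagram, and `getLength()` reports its true size. The class name and payload here are my own invention for the demo.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class DatagramWholeness {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);   // any free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = "hello-udp".getBytes("US-ASCII");
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            // The buffer is larger than the payload; receive() still returns
            // exactly one datagram, never a fragment or a merge of two.
            byte[] buffer = new byte[1024];
            DatagramPacket in = new DatagramPacket(buffer, buffer.length);
            receiver.receive(in);
            System.out.println(in.getLength());
            System.out.println(new String(in.getData(), 0, in.getLength(), "US-ASCII"));
        }
    }
}
```

Over loopback this reliably prints `9` and `hello-udp`; over a real network the same datagram could simply be dropped, in which case `receive()` blocks.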

Related

Indefinite stall of TCP packet reception

Deployment environment:
I have created a TCP server using Java on Windows 10. My TCP client program is written in VC++ and runs on Windows 7 (I don't have any control over this part of the code; it is a black box to me).
My TCP server code is like this:
Socket s = ss.accept();
s.setReceiveBufferSize(2000);
s.setSendBufferSize(2000);
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
Following is the TCP connection handler snippet:
InputStream incomingPacketBuffer = this.clientSocket.getInputStream();
OutputStream outgoingPacketBuffer = this.clientSocket.getOutputStream();
int bufferLen = 0;
byte inBuffer[] = new byte[this.clientSocket.getReceiveBufferSize()];
byte outBuffer[] = new byte[this.clientSocket.getSendBufferSize()];
while (this.clientSocket.isConnected()) {
    bufferLen = incomingPacketBuffer.read(inBuffer);
    if (bufferLen > 0) {
        outBuffer = (byte[]) this.packetHandlerModule.invoke(this.packetHandler, Arrays.copyOf(inBuffer, bufferLen));
    }
    if (outBuffer != null) {
        if (this.clientSocket.isConnected()) {
            outgoingPacketBuffer.write(outBuffer);
            outgoingPacketBuffer.flush();
        }
    }
}
this.clientSocket.close();
this.clientSocket.close();
The communication is packet based and the protocol/parsing is handled by packetHandler.
Two more variants I've tried:
I have tried to close the socket as and when a reply is sent back to the client. That is, after receiving one packet of data, I reply to the client and close the connection.
I used inputStream.available before using the read method.
The problem I face:
Most of the time the TCP server replies to incoming packets within a second. But if the server receives a packet after some idle time, it doesn't reply to it. Sometimes the reply is not transmitted even while active communication is going on. Secondly, the isConnected function returns true even after the client socket has closed the connection.
Debugging attempts:
I used Tera Term to send packets and checked it. The behavior is the same. As long as I send packets one after another, I don't have an issue. Once one packet doesn't get a reply, every packet sent after that also gets no reply from the server.
When I press Ctrl+C in the server console, all the packets sent from Tera Term are processed by the TCP server and replies are sent back. After this the server works properly for some duration.
I checked the packet flow with Wireshark. When the replies are sent back normally, each is sent along with the ACK of the client request (SYN, SYN+ACK, ACK, PSH, PSH+ACK, FIN, FIN+ACK, ACK). When the reply gets stalled (may not be the right term; it is stuck in inputStream.available or inputStream.read), only an ACK packet is sent by the server (SYN, SYN+ACK, ACK, PSH, ACK).
I checked many forums and other threads on Stack Exchange and learned about Nagle's algorithm, that the application must take care of packetization in TCP, and that TCP may deliver 10+10 bytes as 8+12 or 15+5 or in any such manner. The server code takes care of packetization, and setKeepAlive is set to true (there is no problem when a packet is sent from the server).
Problem in short: "At times, the TCP read call is blocked for a long duration even when there are incoming packets. When Ctrl+C is pressed, they get processed."
PS: I just started posting queries on Stack Exchange, so kindly let me know if there are any issues with the way I formulated the query.
PPS: Sorry for such a long post.
UPDATE
The comment from EJB helped me to identify the peer disconnect.
I made another setup with Ubuntu 16.04 as the operating system for the server. It has been 3 days; the Windows system has had the issue occasionally, while Ubuntu 16.04 never stalled.
Some things to consider:
the TCP buffer sizes are usually at least 8K, and I don't think you can shrink them to 2000 bytes; even if you can, I don't think it's a good idea.
the size of the byte[] doesn't really matter much above about 2K; you may as well pick a value.
you don't need to create the buffer more than once.
So in short I would try:
Socket s = ss.accept();
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
and
try {
    InputStream in = this.clientSocket.getInputStream();
    OutputStream out = this.clientSocket.getOutputStream();
    int bufferLen = 0;
    byte[] buffer = new byte[2048];
    while ((bufferLen = in.read(buffer)) > 0) {
        out.write(buffer, 0, bufferLen); // not buffered, so no need to flush
    }
} finally {
    this.clientSocket.close();
}
At times, TCP read call is getting blocked for a long duration even when there is incoming packets.
I would also write a test Java client, to check whether this is due to behaviour in Java.
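A minimal sketch of such a test client. The host and port of the real server are unknown here, so this demo stands up a trivial echo server in the same process (an assumption, purely so the sketch runs standalone); against the real server you would substitute its address and expected reply length.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpEchoTest {
    public static void main(String[] args) throws Exception {
        try (ServerSocket ss = new ServerSocket(0)) {
            // Stand-in echo server; replace with your real server's host/port.
            Thread server = new Thread(() -> {
                try (Socket s = ss.accept();
                     InputStream in = s.getInputStream();
                     OutputStream out = s.getOutputStream()) {
                    byte[] buf = new byte[2048];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        out.write(buf, 0, n);
                    }
                } catch (IOException ignored) { }
            });
            server.start();

            try (Socket client = new Socket("localhost", ss.getLocalPort());
                 OutputStream out = client.getOutputStream();
                 DataInputStream in = new DataInputStream(client.getInputStream())) {
                client.setTcpNoDelay(true);
                byte[] request = "ping".getBytes("US-ASCII");
                out.write(request);
                out.flush();
                byte[] reply = new byte[request.length];
                in.readFully(reply);   // blocks until the full reply arrives
                System.out.println(new String(reply, "US-ASCII"));
            }
            server.join(1000);
        }
    }
}
```

If this client never stalls against your server while the VC++ client does, that points away from the Java side.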

Java UDP DatagramSocket stops receiving

I have a DatagramSocket where I'm receiving in a loop, and it eventually just stops receiving packets. I send the server a hello message that establishes the connection. Then I start receiving packets as expected. Eventually it just stops receiving.
The sending server has verified via tcpdump that they are still sending packets to the same address, but eventually this code hangs on the receive call.
Is there anything that would cause the socket to stop receiving?
String hello = "hello";
InetAddress IPAddress = InetAddress.getByName("serveraddress");
DatagramPacket outboundPacket = new DatagramPacket(hello.getBytes(),hello.getBytes().length, IPAddress, 54321 );
DatagramSocket registerSocket = new DatagramSocket(61646);
registerSocket.send(outboundPacket);
int count = 0;
while(!done){
count++;
byte[] inboundData = new byte[1368];
DatagramPacket inboundPacket = new DatagramPacket(inboundData,inboundData.length);
System.out.println(registerSocket.getPort());
System.out.println(registerSocket.getLocalPort());
//Eventually locks up here after hundreds of successful receives
registerSocket.receive(inboundPacket);
byte[] data = inboundPacket.getData();
String test = new String(data, "ISO-8859-1");
System.out.println(test+"---"+count);
}
registerSocket.close();
If you're behind NAT, the mapping will time out if there's no outbound traffic for too long. Make sure to send an outbound datagram every few minutes to keep the mapping active in the router.
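A sketch of that keep-alive, using a ScheduledExecutorService to re-send a tiny datagram periodically. The target here is a loopback socket purely so the demo is self-contained and finishes; in your setup you would send to the real server address, and a period like 60 seconds (below your router's UDP mapping timeout) rather than the 100 ms used for the demo.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NatKeepAlive {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket target = new DatagramSocket(0);   // stand-in for the server
             DatagramSocket socket = new DatagramSocket()) {  // your sending socket
            InetAddress server = InetAddress.getLoopbackAddress();
            int port = target.getLocalPort();
            byte[] keepAlive = "ka".getBytes("US-ASCII");

            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Periodically re-send a tiny datagram so the NAT mapping stays active.
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    socket.send(new DatagramPacket(keepAlive, keepAlive.length, server, port));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, 0, 100, TimeUnit.MILLISECONDS);   // use e.g. 60 s in production

            DatagramPacket in = new DatagramPacket(new byte[16], 16);
            target.receive(in);   // proves the keep-alives are flowing
            System.out.println(new String(in.getData(), 0, in.getLength(), "US-ASCII"));
            scheduler.shutdownNow();
        }
    }
}
```

The receive loop itself stays unchanged; the scheduler just keeps outbound traffic trickling through the mapping.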
It's not clear from the question whether you work with several DatagramSockets inside one process: this would be non-trivial. See Java: Receiving an UDP datagram packet with multiple DatagramSockets.
Unless you are using multicast, a newly created datagram socket on the same port can take over reception and block the existing one from receiving.

Is there any way to make the UDP packet loss be lower?

I am using a UDP client to send about 20k requests per second, with <1k of data per request. I need to implement a UDP server in Java.
The code looks like the following:
public void startReceive() throws IOException {
    udpSocket = new DatagramSocket(Constant.PORT);
    byte[] buffer = new byte[Constant.SIZE];
    udpPacket = new DatagramPacket(buffer, buffer.length);
    int len;
    while (true) {
        udpPacket.setLength(buffer.length); // reset before reuse: receive() shrinks the length
        udpSocket.receive(udpPacket);
        len = udpPacket.getLength();
        if (len > 0) {
            // hand the data to another thread pool
        }
    }
}
Is there any way to make the UDP server's packet loss lower?
Thanks.
The packet loss is in the network; the only programming options are to:
send less data, e.g. compress it, or
resend data on a loss so that less data is lost, or
use TCP, which handles packet loss for you, or
use a library like Aeron, which uses UDP and handles packet loss for you.
The best solution is almost always to fix the network to reduce the loss in the first place, but have a strategy which accepts some loss will happen.
Note: as UDP is a lossy protocol and TCP is not, many routers are optimised to drop UDP packets when the TCP load increases (the router expects that any dropped TCP packet will simply be sent again anyway).
This can mean that under load, you can see higher packet loss with UDP than TCP.
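The "send less data" option can be sketched with `java.util.zip` (the class and helper names are mine; in a real protocol you would also prefix the original size so the receiver knows how large a buffer to inflate into, which this sketch passes as a parameter instead):

```java
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DatagramCompression {
    // Compress a payload before handing it to DatagramPacket.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length + 64]; // headroom for incompressible data
        int len = deflater.deflate(out);
        deflater.end();
        return Arrays.copyOf(out, len);
    }

    // originalSize would normally travel in the datagram, e.g. as a length prefix.
    static byte[] decompress(byte[] input, int originalSize) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(input);
        byte[] out = new byte[originalSize];
        int len = inflater.inflate(out);
        inflater.end();
        return Arrays.copyOf(out, len);
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[1000];   // ~1k per request, as in the question
        Arrays.fill(payload, (byte) 'x');  // repetitive data compresses well
        byte[] packed = compress(payload);
        System.out.println(packed.length < payload.length);
        System.out.println(Arrays.equals(payload, decompress(packed, 1000)));
    }
}
```

Smaller datagrams reduce the bytes the network must carry per request; how much this helps depends entirely on how compressible your payload is.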

UDP packet separation

I'm creating a UDP service on Android.
For clarity, part of the code is copied here:
byte[] message = new byte[1500];
DatagramPacket packet = new DatagramPacket(message, message.length);
socket = new DatagramSocket(Constants.LISTENING_PORT_NUMBER);
socket.receive(packet);
The problem is that I'm receiving UDP packets continuously.
For example packet #1 (250 bytes long), packet #2 (182 bytes long), packet #3 (403 bytes long), etc.
I need to get and process the separate UDP packets, which have variable lengths.
According to the UDP protocol specification, UDP packets have message boundaries defined.
However, I found no solution in Java which can separate UDP packets.
In my current solution I have to define the packet length I must read, but I don't know the packet length before I receive it.
Am I missing a point?
EDIT:
Thanks to both Tobias and shazin; both are correct answers. Sadly I can't mark two answers as correct.
The socket.receive(packet) call receives up to the UDP packet boundary; subsequent packets can be read with another socket.receive(packet).
My problem must have been that, while the first message was being processed, further arriving messages were not processed because of the synchronous processing; now I'll pass the message processing to an async task and hopefully will be able to catch all arriving packets in time.
You cannot know the packet length beforehand. You can define a maximum-size byte array based on the data you may be receiving; a 2048-byte array is recommended.
byte[] message = new byte[2048];
Even if the incoming message packets have variable lengths, you can use the following code to receive them:
byte[] message = new byte[2048];
DatagramPacket packet = new DatagramPacket(message, message.length);
socket = new DatagramSocket(Constants.LISTENING_PORT_NUMBER);
socket.receive(packet);
String data = new String(packet.getData(), 0, packet.getLength());
The getData and getLength methods can be used to determine the size of the received packet.
Maybe I'm missing something here, but a DatagramPacket is one packet as sent. It has the getLength() and getOffset() methods required to get at the data, and getData() returns the data that was sent.
Here is a link that can help you further.
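Both answers can be seen in one loopback sketch (class name and messages are mine): each `receive()` returns exactly one variable-length datagram, delimited by `getLength()`, and a reused `DatagramPacket` needs its length reset between calls.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpBoundaries {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket rx = new DatagramSocket(0);
             DatagramSocket tx = new DatagramSocket()) {
            // Send three datagrams of different sizes back to back.
            for (String msg : new String[] {"short", "a bit longer", "x"}) {
                byte[] data = msg.getBytes("US-ASCII");
                tx.send(new DatagramPacket(data, data.length,
                        InetAddress.getLoopbackAddress(), rx.getLocalPort()));
            }
            byte[] buffer = new byte[2048];
            DatagramPacket p = new DatagramPacket(buffer, buffer.length);
            for (int i = 0; i < 3; i++) {
                p.setLength(buffer.length); // reset before reuse, or the length shrinks
                rx.receive(p);              // one call == one whole datagram
                System.out.println(new String(p.getData(), p.getOffset(),
                        p.getLength(), "US-ASCII"));
            }
        }
    }
}
```

Over loopback this prints the three messages, each recovered at its own length, with no framing code needed.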

MulticastSocket - how to count how many datagram packets are coming in

So I am sending audio over UDP via multicast.
The sender is sending a raw audio UDP packet every 10 ms. Unfortunately, every now and then it misses a packet. So what I did was try to time the send/receive so that I can work out whether I have missed one.
Here is what I currently have:
long prevReceived = System.currentTimeMillis();
socket.receive(recv);
long messageReceived = System.currentTimeMillis();
long dateDiff = messageReceived - prevReceived; // gap since the last receive
if (dateDiff > 20) {
    // ... missed packet: add the previous packet again
}
The problem I am having is that sometimes the Java MulticastSocket receive method takes 70 ms to receive a message. But when I check with Microsoft Network Monitor, the sender is still sending messages every 10 ms.
So I was wondering if there is a way to see whether the MulticastSocket object has any pending packets: socket.count() or something.
Or does the DatagramPacket record the time it was received from the socket, e.g. something like recv.timestamp()?
So far I have not found anything, and I cannot work out why it takes 70 ms to process a message when Microsoft Network Monitor shows one arriving every 10 ms.
Apache MINA can count your datagram packets.
http://mina.apache.org/mina/userguide/ch2-basics/sample-udp-server.html
http://mina.apache.org/mina/userguide/user-guide-toc.html
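If pulling in MINA is too heavy, a plain-Java alternative (my sketch, not from the answer above) is to have the sender prepend a sequence number to each audio packet and count gaps on the receiver, which detects missed packets without relying on timing:

```java
import java.nio.ByteBuffer;

public class GapCounter {
    private long expected = -1; // next sequence number we expect; -1 until first packet
    private long missed = 0;

    // Feed each received datagram's payload; the first 4 bytes are assumed to be
    // a big-endian sequence number prepended by the sender.
    public void onPacket(byte[] data, int length) {
        int seq = ByteBuffer.wrap(data, 0, 4).getInt();
        if (expected >= 0 && seq > expected) {
            missed += seq - expected; // these packets never arrived
        }
        expected = seq + 1;
    }

    public long missedSoFar() {
        return missed;
    }

    public static void main(String[] args) {
        GapCounter c = new GapCounter();
        // Simulate receiving sequence numbers 0,1,2,5,6: packets 3 and 4 were lost.
        for (int seq : new int[] {0, 1, 2, 5, 6}) {
            c.onPacket(ByteBuffer.allocate(4).putInt(seq).array(), 4);
        }
        System.out.println(c.missedSoFar());
    }
}
```

On a gap you can then insert the previous packet (as the question already does) or interpolate, and the counter tells you exactly how many packets to fill in.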
