I have a DatagramSocket where I'm receiving in a loop, and it eventually just stops receiving packets. I send the server a hello message that establishes the connection. Then I start receiving packets as expected. Eventually it just stops receiving.
The sending server has verified via tcpdump that they are still sending packets to the same address, but eventually this code hangs on the receive call.
Is there anything that would cause the socket to stop receiving?
String hello = "hello";
InetAddress IPAddress = InetAddress.getByName("serveraddress");
DatagramPacket outboundPacket = new DatagramPacket(hello.getBytes(),hello.getBytes().length, IPAddress, 54321 );
DatagramSocket registerSocket = new DatagramSocket(61646);
registerSocket.send(outboundPacket);
int count = 0;
while (!done) {
    count++;
    byte[] inboundData = new byte[1368];
    DatagramPacket inboundPacket = new DatagramPacket(inboundData, inboundData.length);
    System.out.println(registerSocket.getPort());
    System.out.println(registerSocket.getLocalPort());
    //Eventually locks up here after hundreds of successful receives
    registerSocket.receive(inboundPacket);
    byte[] data = inboundPacket.getData();
    String test = new String(data, "ISO-8859-1");
    System.out.println(test + "---" + count);
}
registerSocket.close();
If you're behind NAT, the mapping will time out if there's no outbound traffic for too long. Make sure to send an outbound datagram every few minutes to keep the mapping active in the router.
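As a minimal sketch of that idea, assuming the same socket and server address used for the hello message (the interval and payload here are illustrative, not prescribed):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class KeepAlive {
    // Sends a small datagram periodically so the NAT mapping stays active.
    public static Thread start(DatagramSocket socket, InetAddress server, int port) {
        Thread t = new Thread(() -> {
            byte[] ping = "keepalive".getBytes();
            try {
                while (!socket.isClosed()) {
                    socket.send(new DatagramPacket(ping, ping.length, server, port));
                    Thread.sleep(30_000); // well under typical NAT UDP timeouts
                }
            } catch (Exception e) {
                // socket closed or thread interrupted: just exit
            }
        });
        t.setDaemon(true); // don't keep the JVM alive for this
        t.start();
        return t;
    }
}
```

The receiving loop stays unchanged; this thread only refreshes the router's mapping in the background.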
It's not clear from the question whether you work with several DatagramSockets inside one process; that would be non-trivial. See Java: Receiving an UDP datagram packet with multiple DatagramSockets
Unless multicast is used, a newly created datagram socket bound to the same port can take over delivery and prevent the existing socket from receiving.
Related
I'm trying to create a simple Java UDP client-server program. The server/host waits for an appropriate request from the client before doing something. Everything seems to be working fine, but the reply from the host to the client is always truncated for some reason.
I assume it has something to do with the length of "Bob", as all responses from the host are shortened to 3 characters. I tried messing around with setLength() and the buffer size but can't seem to figure it out...
Client.java:
public class Client {
    public static void main(String[] args) throws Exception {
        //Use Java's built-in DatagramSocket and DatagramPacket to implement UDP
        DatagramSocket socket = new DatagramSocket(); //Create socket object to send data
        socket.setSoTimeout(5000); //Throw an exception if no data received within 5000ms
        //Create scanner to grab user input
        Scanner input = new Scanner(System.in);
        System.out.print("Please enter a message to send to Alice: ");
        String bobInput = input.nextLine(); //Storing user input
        input.close(); //Close scanner to prevent memory leaks
        byte[] buffer = new byte[65535];
        //Create packet containing message
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length, InetAddress.getByName("localhost"), 1500);
        packet.setData(bobInput.getBytes());
        socket.send(packet); //Send message to host
        socket.receive(packet);
        System.out.println("Received from Alice: " + new String(packet.getData(), 0, packet.getLength()));
        socket.close();
    }
}
Host.java:
public class Host {
    public static void main(String[] args) throws Exception {
        //Use Java's built-in DatagramSocket and DatagramPacket to implement UDP
        DatagramSocket socket = new DatagramSocket(1500); //Create a socket to listen at port 1500
        byte[] buf = new byte[65535]; //Byte array to wrap
        System.out.println("Parameters successfully read. Listening on port 1500...");
        //While-loop to keep host running until terminated
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length); //Create a packet to receive message
            socket.receive(packet); //Receive the packet
            //If Alice receives a packet with the message "Bob"
            if (new String(packet.getData(), 0, packet.getLength()).equals("Bob")) {
                System.out.println("Bob has sent a connection request.");
                String test = "Hello Bob!";
                packet.setData(test.getBytes());
                System.out.println("Text sent: " + new String(packet.getData(), 0, packet.getLength()));
            }
        }
    }
}
Console output for client:
Please enter a message to send to Alice: Bob
Received from Alice: Hel
Console output for host:
Parameters successfully read. Listening on port 1500...
Bob has sent a connection request.
Text sent: Hello Bob!
It's my first time working with UDP, so I apologize if it's some basic mistake that I made.
In the client, you create a packet of length 3 (because the content is 'Bob'). You use the same packet for a receive, which per the doc will then truncate the received data to the packet length.
The DatagramPacket class does not appear to distinguish 'length of underlying buffer' from 'length of data in buffer'.
Best to use separate packet structures; the send packet just needs to contain 'Bob'; the receive packet needs to be large enough for the maximum expected response.
(Debugging hint: figuring this out starts with noticing that it's unlikely to be mere coincidence that the lengths of the transmitted message and of the truncated received message are identical).
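A minimal sketch of the client with separate send and receive packets, per the advice above (the class name is illustrative; host address and port are taken from the question):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class FixedClient {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        socket.setSoTimeout(5000);

        // Send packet: sized to the message only.
        byte[] out = "Bob".getBytes();
        socket.send(new DatagramPacket(out, out.length,
                InetAddress.getByName("localhost"), 1500));

        // Receive packet: a separate, full-size buffer, so the reply
        // is not truncated to the 3-byte send length.
        byte[] in = new byte[65535];
        DatagramPacket reply = new DatagramPacket(in, in.length);
        socket.receive(reply);
        System.out.println("Received from Alice: "
                + new String(reply.getData(), 0, reply.getLength()));
        socket.close();
    }
}
```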
Typically, in cases like this, you should create a custom protocol, either textual or binary, in which you specify the length of the message to be read; this is not TCP, so you simply wait until the entire datagram has been received and then parse its contents to extract the message.
This requires defining your own protocol to be used on top of UDP; there are two macro types of protocols: text-type and binary-type.
In text-type protocols you often use integers as prefixes that indicate the actual length of the message to be read. In fact, if you know that your length prefix is, for example, a 32-bit integer, you can read the first four bytes of the datagram and know how many more you will have to read to get your string. More complex text protocols make use of delimiting characters to specify a format that defines options and data; of this family you will certainly be familiar with HTTP.
In binary-type protocols it is a little more complex: you can have flags and various lengths referring to different fields, which may or may not be optional.
In short, in this case you have to define for yourself a frame format to use and interpret. If you have to deal with fragmentation, options, and variable lengths, I would recommend that you take a look at this type of format.
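As a sketch of the length-prefix idea: put a 4-byte big-endian length in front of the payload, and on the other side read the length first and then exactly that many bytes (the class and method names here are illustrative, not from the question):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LengthPrefix {
    // Encode: 4-byte big-endian length followed by the UTF-8 payload.
    public static byte[] frame(String message) {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(4 + payload.length)
                .putInt(payload.length)
                .put(payload)
                .array();
    }

    // Decode: read the length, then exactly that many payload bytes.
    // Trailing bytes in an oversized receive buffer are ignored.
    public static String unframe(byte[] datagram) {
        ByteBuffer buf = ByteBuffer.wrap(datagram);
        int length = buf.getInt();
        byte[] payload = new byte[length];
        buf.get(payload);
        return new String(payload, StandardCharsets.UTF_8);
    }
}
```

Because the decoder trusts the prefix rather than the buffer size, it works even when the datagram lands in a buffer much larger than the message.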
Keep also in mind that with UDP you should in real projects also handle the retransmission of lost packets.
Anyway, it's probably just an omission in the post, but your code is definitely missing the send() call that actually sends the response data, so from your code we can't see what you actually send.
In conclusion, from what can be inferred from the code given and the output provided, you make use of setData() to send "Bob". The documentation for that method is as follows:
Set the data buffer for this packet. With the offset of this
DatagramPacket set to 0, and the length set to the length of buf.
So, no matter what buffer you initialized the packet with, its length is now reduced to a maximum of three. In fact, when you receive the response you read the message with getLength(), which will always return three.
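If you do want to reuse a single packet, a sketch of the reset (assuming a helper wrapped around the question's logic; setData(buffer) restores the packet's length to the full buffer size before the receive):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ReusePacket {
    public static String sendAndReceive(DatagramSocket socket, InetAddress addr,
                                        int port, String message) throws Exception {
        byte[] buffer = new byte[65535];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length, addr, port);
        packet.setData(message.getBytes()); // length shrinks to the message length
        socket.send(packet);
        packet.setData(buffer);             // restore the full buffer before receiving
        socket.receive(packet);             // reply is no longer truncated
        return new String(packet.getData(), 0, packet.getLength());
    }
}
```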
Deployment environment:
I have created a TCP server using Java on Windows 10. My TCP client program is written in VC++ and runs on Windows 7 (I don't have any control over this part of the code; it is a black box to me).
My TCP server code is like this:
Socket s = ss.accept();
s.setReceiveBufferSize(2000);
s.setSendBufferSize(2000);
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
Following is the TCP connection handler snippet:
InputStream incomingPacketBuffer = this.clientSocket.getInputStream();
OutputStream outgoingPacketBuffer = this.clientSocket.getOutputStream();
int bufferLen = 0;
byte inBuffer[] = new byte[this.clientSocket.getReceiveBufferSize()];
byte outBuffer[] = new byte[this.clientSocket.getSendBufferSize()];
while (this.clientSocket.isConnected()) {
    bufferLen = incomingPacketBuffer.read(inBuffer);
    if (bufferLen > 0) {
        outBuffer = (byte[]) this.packetHandlerModule.invoke(this.packetHandler, Arrays.copyOf(inBuffer, bufferLen));
    }
    if (outBuffer != null) {
        if (this.clientSocket.isConnected()) {
            outgoingPacketBuffer.write(outBuffer);
            outgoingPacketBuffer.flush();
        }
    }
}
this.clientSocket.close();
The communication is packet based and the protocol/parsing is handled by packetHandler.
Two more variants I've tried:
I have tried to close the socket as soon as a reply is sent back to the client. That is, after receiving one packet of data, I reply to the client and close the connection.
I used inputStream.available() before calling the read method.
The problem I face:
Most of the time the TCP server replies to incoming packets within a second. If the server receives a packet after some idle time, the server doesn't reply to it. Sometimes the reply is not transmitted even while active communication is going on. Secondly, the isConnected method returns true even after the client has closed the connection.
Debugging attempts:
I used Tera Term to send packets and checked; the behavior is the same. As long as I send packets one after another, I don't have an issue. If one packet doesn't get a reply, then every packet sent after that also gets no reply from the server.
When I press Ctrl+C in the server console, all the packets sent from Tera Term are processed by the TCP server and replies are sent back. After this the server works properly for some duration.
I checked the packet flow with Wireshark. When the replies are sent back normally, each is sent along with the ACK of the client request (SYN, SYN+ACK, ACK, PSH, PSH+ACK, FIN, FIN+ACK, ACK). When the reply gets stalled (may not be the right term; it is stuck in inputStream.available or inputStream.read), only an ACK packet is sent by the server (SYN, SYN+ACK, ACK, PSH, ACK).
I checked many forums and other threads on Stack Exchange and learned about Nagle's algorithm, that the application must take care of packetization in TCP, and that TCP may deliver 10+10 bytes as 8+12 or 15+5 or in any such manner. The server code takes care of packetization, and setKeepAlive is set to true (there is no problem when a packet is sent from the server).
Problem in short: "At times, the TCP read call is blocked for a long duration even when there are incoming packets. When Ctrl+C is pressed, they get processed."
PS: I just started posting queries on stackexchange, so kindly let me know if there is any issues in the way of formulating the query.
PPS: Sorry for such a long post.
UPDATE
The comment from EJB helped me to identify the peer disconnect.
I made another setup with Ubuntu 16.04 as the operating system for the server. It has been 3 days; the Windows system had the issue occasionally, but Ubuntu 16.04 never stalled.
Some things to consider:
The TCP buffer sizes are usually at least 8K, and I don't think you can shrink them to 2000 bytes; even if you can, I don't think it's a good idea.
The size of the byte[] doesn't really matter much above about 2K, so you may as well pick a value.
You don't need to create a buffer more than once.
So in short I would try:
Socket s = ss.accept();
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
and
try {
    InputStream in = this.clientSocket.getInputStream();
    OutputStream out = this.clientSocket.getOutputStream();
    int bufferLen = 0;
    byte[] buffer = new byte[2048];
    while ((bufferLen = in.read(buffer)) > 0) {
        out.write(buffer, 0, bufferLen); // not buffered so no need to flush
    }
} finally {
    this.clientSocket.close();
}
At times, TCP read call is getting blocked for a long duration even when there is incoming packets.
I would write a test Java client to see whether or not this is due to behaviour in Java.
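A minimal sketch of such a test client (the host, port, and "ping" payload are placeholders for whatever the server under test expects):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Connects, sends one request, and prints whatever comes back.
// A read timeout makes a stalled server fail fast instead of hanging.
public class TestClient {
    public static void main(String[] args) throws IOException {
        try (Socket s = new Socket("localhost", 9000)) {
            s.setSoTimeout(5000); // throw instead of blocking forever
            OutputStream out = s.getOutputStream();
            InputStream in = s.getInputStream();
            out.write("ping".getBytes());
            out.flush();
            byte[] buf = new byte[2048];
            int n = in.read(buf);
            System.out.println("got " + n + " bytes: " + new String(buf, 0, n));
        }
    }
}
```

If this client sees the same stall, the problem is on the server side rather than in the VC++ client.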
I know that TCP simulates a stream, so typically reads will return as soon as any data is received. That's why I use this snippet to make sure it waits for the entire packet before acting on it:
int packetSize = inputStream.readShort();
byte[] packetBuffer = new byte[packetSize];
int byteTrans = 0;
while (byteTrans < packetSize) {
    int read = inputStream.read(packetBuffer, byteTrans, packetSize - byteTrans);
    if (read < 0) {
        throw new EOFException("stream closed before the full packet arrived");
    }
    byteTrans += read;
}
For UDP, however, will I still have to work around the same problem? I don't think so, because TCP simulates a stream by breaking up your data into smaller packets and sending them, while in UDP you have more control over the whole process.
For reading UDP I use
byte[] packetByte = new byte[packetSize];
DatagramPacket packet = new DatagramPacket(packetByte, packetByte.length);
socket.receive(packet);
Do I have to implement a similar system for UDP?
When you send a datagram packet, it will be received in its entirety, yes (when it is actually received - continue reading the answer).
The behavior of UDP and TCP varies in much more than just that. UDP does not guarantee that packets will be received in the same order they are sent (or received at all), or that they are received exactly once. UDP is more of a "fire and forget" protocol, whereas TCP maintains a connection state.
In short, if the packet is received, you will get the whole packet. But it may not be received at all.
I'm creating a UDP service on Android.
For clarity, part of the code is copied here:
byte[] message = new byte[1500];
DatagramPacket packet = new DatagramPacket(message, message.length);
socket = new DatagramSocket(Constants.LISTENING_PORT_NUMBER);
socket.receive(packet);
Problem is that I'm receiving UDP packets continuously.
For example packet #1 (250 byte long), packet #2 (182 byte long), packet #3 (403 byte long) etc...
I need to get and process the separate UDP packets which has variable length.
According to the UDP protocol specification, UDP packets have defined message boundaries.
However, I have found no solution in Java which can separate UDP packets.
In my current solution I have to define the packet length to read, but I don't know the packet length before I receive it.
Am I missing a point?
EDIT:
Thanks to both Tobias and shazin; both are correct answers. Sadly I can't mark two answers as correct.
socket.receive(packet) will receive up to the UDP packet boundary; subsequent packets can be read with another socket.receive(packet).
My problem must be that while the first message is being processed, further arriving messages are not handled, because the processing is synchronous. Now I'll pass the message processing to an async task and hopefully will be able to catch all arriving packets in time.
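A sketch of that idea: keep the receive loop tight and hand each packet off to an executor (the class name, pool size, and buffer size here are illustrative):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncReceiver {
    public static void run(DatagramSocket socket) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            byte[] buffer = new byte[2048];
            while (!socket.isClosed()) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet); // blocks until the next datagram arrives
                // Copy the payload first: the buffer is overwritten by the
                // next receive, so each worker needs its own copy.
                byte[] data = Arrays.copyOf(packet.getData(), packet.getLength());
                pool.execute(() -> process(data));
            }
        } finally {
            pool.shutdown();
        }
    }

    static void process(byte[] data) {
        System.out.println("processing " + data.length + " bytes");
    }
}
```

The receive loop then spends almost no time between receive() calls, so back-to-back packets are far less likely to overflow the socket's receive buffer.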
You cannot know the packet length beforehand. You can define a maximum-size byte array based on the data you may be receiving; a 2048-byte array is recommended.
byte[] message = new byte[2048];
Even if incoming message packets have variable lengths, you can use the following code to receive them:
byte[] message = new byte[2048];
DatagramPacket packet = new DatagramPacket(message, message.length);
socket = new DatagramSocket(Constants.LISTENING_PORT_NUMBER);
socket.receive(packet);
String data = new String(packet.getData(), 0, packet.getLength());
The getData and getLength methods can be used to determine the size of the received packet.
Maybe I'm missing something here, but a DatagramPacket is one packet that is sent. It has the getLength() and getOffset() methods required to get at the data. I believe there is also a getData() that returns the data sent.
Here you have a link that can help you further.
I am running a simple UDP Java server, which collects the IP and port of a client when it connects and stores the information in a database.
The client is still listening to the server. The server stops.
Later, the server wants to reuse the database information to reach the client; and as the client is still listening on the same port, I guess the client should receive the communication.
I am new to UDP; please let me know the way to achieve the above objective. Thank you.
Let me rephrase the question, as I did try the ways suggested by members of Stack Overflow.
The client can be contacted by the server within a short timespan, but after, say, 10 minutes the client is unreachable; although the client appears ready to listen to the server the whole time, the server cannot reach it even after several tries. What could be the cause of this? Please let me know how to deal with it.
I think you are a bit confused regarding the UDP protocol (RFC 768). It would be helpful to review the UDP protocol to understand the differences between UDP and TCP.
Regarding your specific problem, it is difficult to know what your exact problem is without any code. There is a client-server UDP example available in the Sun tutorials.
UDP is sessionless, so I guess it should indeed work.
It would go something like that:
// Client:
DatagramSocket socket = new DatagramSocket();
DatagramPacket req = new DatagramPacket(data, data.length, serverAddress, serverPort);
socket.send(req);
DatagramPacket resp = new DatagramPacket(new byte[MAX_RESP_SIZE], MAX_RESP_SIZE);
socket.receive(resp);

// Server:
DatagramSocket socket = new DatagramSocket(port);
while (!stopped) {
    DatagramPacket req = new DatagramPacket(new byte[MAX_REQ_SIZE], MAX_REQ_SIZE);
    socket.receive(req);
    saveToDatabase(req.getAddress(), req.getPort());
}
socket.close();

// Then later:
DatagramSocket socket = new DatagramSocket(port);
// retrieve clientAddr and clientPort from database
DatagramPacket resp = new DatagramPacket(data, data.length, clientAddress, clientPort);
socket.send(resp);
socket.close();