Java: Is corrupt data possible with UDP in Java

I am writing a network test which sends UDP packets of different sizes (up to 50k bytes). I want to measure ping and packet loss. Do I also have to check the data transferred, or is the packet dropped if it contains corrupt data?

First of all, the TCP/IP protocol suite (which includes UDP) is implemented in all modern operating systems. The implementation is often referred to as the network stack. The Java virtual machine itself uses regular user-level sockets for its networking, so it uses the operating system's network stack. All you can do from Java is set socket options, specify the source and destination, and provide the data to be sent. So I wouldn't worry about creating malformed packets from Java. The segment (layer 4), packet (layer 3) and frame (layer 2) are all created for your application by the OS. Nothing you can do with regular sockets can change that unless you're using raw sockets.
Second, there are multiple error detection codes in each TCP/IP packet. UDP and TCP have a 16-bit checksum that covers the header, the packet's payload, and several IP header fields. IP packets themselves have a header checksum, and both Ethernet (802.3) and Wi-Fi (802.11) have their own error detection mechanisms at the frame level. The default behavior I've seen in both networking equipment and endpoint operating systems when handling erroneous packets is to drop them. So there's really very little chance of getting errors in your user-level socket.
Edit:
One point worth mentioning about UDP is that, unlike the stream-oriented TCP, UDP sockets operate on a per-packet basis: whatever you send() on a UDP socket will be sent as a single UDP datagram. So make sure you don't put anything too big in there at any single point. Don't try to send a big 4 KiB chunk as a single datagram, because that will cause IP-level fragmentation. Do your best to keep the payload at least 28 bytes smaller than the smallest MTU along the path from your machine to your destination (the IP header is normally 20 bytes long and the UDP header adds 8 more). When in doubt, limit the data sent per packet to about 1000-1200 bytes, well below the common 1400-1500 byte MTU range.
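For illustration, here is a minimal sketch of the kind of probe sender described above. The host name, port, packet count and payload size are placeholder assumptions, and you would need an echoing receiver on the other end to turn the timestamps into round-trip times:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class UdpProbeSender {
    // Stay well below the usual 1400-1500 byte MTU so a single send()
    // maps to a single, unfragmented IP packet.
    private static final int PAYLOAD_SIZE = 1200;

    public static void main(String[] args) throws Exception {
        InetAddress target = InetAddress.getByName("test-server.example"); // placeholder host
        int port = 9999;                                                   // placeholder port

        try (DatagramSocket socket = new DatagramSocket()) {
            for (int seq = 0; seq < 100; seq++) {
                ByteBuffer buf = ByteBuffer.allocate(PAYLOAD_SIZE);
                buf.putInt(seq);                 // sequence number, for loss detection
                buf.putLong(System.nanoTime());  // send timestamp, for RTT measurement
                // remaining bytes are just padding up to PAYLOAD_SIZE

                DatagramPacket packet =
                        new DatagramPacket(buf.array(), PAYLOAD_SIZE, target, port);
                socket.send(packet);             // the OS builds the UDP, IP and frame layers
            }
        }
    }
}
```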

Related

How to disable fragmentation: Tomcat8 SSL returns 2 reassembled SSL segments

We have an application running on Tomcat 8. In the HTTPS GET response there are 2 reassembled SSL segments.
Is there any way to turn that off and send ONE TCP packet?
I'm afraid the answer is 'probably not', but let's first determine that your network is doing the right thing. The maximum segment size in a response is limited by the MSS (maximum segment size) value sent by your client in the TCP handshake.
Since you can see the reassembly going on I will assume that you've got Wireshark or tcpdump. Look in the SYN packet sent by your client at the beginning of the conversation. Find the TCP options and within that the MSS value. A normal value for most ethernet hardware will be 1460 bytes.
One way to increase the MSS is to enable jumbo frames if they're supported by your local network hardware.
Also note that in a complex environment 'smart' routers and firewalls are capable of intercepting and modifying (i.e. reducing) MSS values to cope with their own limitations. In environments like these you really have to have Wireshark on both ends of the connection to see the whole picture.
In Tomcat 6 there is only one packet, about 2700 bytes in size.
In Tomcat 8 there are 2 reassembled SSL segments: one of about 290 bytes containing the headers, and another with the rest, the XML body.
By changing the connector protocol to "org.apache.coyote.http11.Http11Nio2Protocol", it works well. I also tried "org.apache.coyote.http11.Http11Nio1Protocol", which sends two packets, and "org.apache.coyote.http11.Http11Protocol", which sends only one packet.

Java: what is the difference between ServerSocket and DatagramSocket?

Basically, I am new to server and client programming in Java. I googled all the necessary resources to learn about this particular topic, but I still did not understand the difference between them.
What I understand so far is that both of them can handle client requests, but I need to know the benefits of each class and the particular scenarios or specific cases in which each can be used efficiently.
For instance, I have a server/client program, a subset of TeamViewer, in which the client program must send a screenshot to the server every millisecond while the server publishes it to another connected client. The code is working, but I found out that ServerSocket consumes so much heap, although it delivers successfully to the server and client as well. I also read a blog (The link is missing) related to my problem which suggested that DatagramSocket is the solution because it does not execute handshakes.
I am really concerned about the benefits and disadvantages of these classes.
A ServerSocket is for accepting incoming network connections on some stream protocol; e.g. TCP/IP.
A DatagramSocket is for sending and receiving datagrams on some connectionless datagram / message protocol; e.g. UDP/IP
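To make the difference concrete, here is a minimal sketch of what server-side code looks like with each class (the port numbers are arbitrary placeholders):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketKindsDemo {

    // TCP: accept a connection, then read from a byte stream.
    static void tcpServer() throws Exception {
        try (ServerSocket server = new ServerSocket(5000);
             Socket client = server.accept();               // blocks until a client connects
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            System.out.println("line from client: " + in.readLine());
        }
    }

    // UDP: no connection; each receive() hands you one whole datagram.
    static void udpServer() throws Exception {
        try (DatagramSocket socket = new DatagramSocket(5001)) {
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);                          // blocks until a datagram arrives
            System.out.println("got " + packet.getLength() + " bytes from "
                    + packet.getAddress());
        }
    }
}
```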
Supplementary questions:
Basically, what is a datagram?
A datagram is a bunch of information sent in a single logical packet. For example, a UDP packet.
and does this mean datagram = lightweight packets ?
It depends on your definition of lightweight!
UDP datagrams are sent as IP packets. If a UDP datagram is too big for an IP packet, it is broken into multiple IP packets by the sender and reassembled by the receiver.
and what does connectionless [mean],
It means that no logical connection exists between the 2 parties. If a component IP packet of a UDP datagram is lost, the UDP datagram is lost. The receiver never knows (at the application level). There is no reporting of data loss and no retrying in UDP. This is typical "connectionless" behavior.
does it mean Data might get lost during transmission?
Basically, yes. If you want reliable / lossless data transmission you should use ServerSocket and Socket; e.g. TCP/IP streams.
However, be aware that even with a (bare) TCP/IP stream, data delivery is not guaranteed:
If there is a network failure, or if either the sender or receiver has a failure, then a connection can be broken while data is in transit. That will result in data loss ... for that connection. (Sockets do not support reconnecting.) If the sender and/or receiver are still alive they will typically be informed that the connection has been broken, but they won't know why, or how much data was lost in transit.
It is possible for data to be corrupted in transit in ways that TCP/IP's error detection cannot spot. The receiver won't know this has happened.
Both of these issues can be addressed at the application protocol level; e.g. using message queues for the first and strong encryption and strong checksumming for the second.
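As a rough sketch of the "strong checksumming" idea (the framing format here is made up purely for illustration), the sender could append a SHA-256 digest to each application-level message and the receiver could verify it:

```java
import java.security.MessageDigest;
import java.util.Arrays;

public class DigestFraming {

    // Append a SHA-256 digest to each application-level message so the
    // receiver can detect corruption that slipped past TCP's 16-bit checksum.
    static byte[] frame(byte[] payload) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(payload);
        byte[] framed = Arrays.copyOf(payload, payload.length + digest.length);
        System.arraycopy(digest, 0, framed, payload.length, digest.length);
        return framed;
    }

    // Recompute the digest on the receiving side and compare.
    static boolean verify(byte[] framed) throws Exception {
        int payloadLength = framed.length - 32;              // SHA-256 digests are 32 bytes
        byte[] payload = Arrays.copyOf(framed, payloadLength);
        byte[] expected = Arrays.copyOfRange(framed, payloadLength, framed.length);
        byte[] actual = MessageDigest.getInstance("SHA-256").digest(payload);
        return MessageDigest.isEqual(expected, actual);
    }
}
```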
Concerning your attempt to use ServerSocket.
The code is working, but I found out that ServerSocket consumes so much heap, although it delivers successfully to the server and client as well.
You are doing something wrong. If you use the API appropriately the memory overheads should be insignificant.
My guess is that you are doing one or more of the following (a sketch of the alternative is shown after this list):
Opening a new connection for each client / server interaction
On the server side, creating a new thread for each connection
Not closing the connections.
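For example, a sketch of the alternative: one long-lived connection that is reused for many messages and closed when done. The host, port and captureScreenshot() helper below are hypothetical placeholders.

```java
import java.io.DataOutputStream;
import java.net.Socket;

public class ReusedConnectionClient {
    public static void main(String[] args) throws Exception {
        // One connection, reused for every frame, and closed automatically
        // when the try block exits -- not one connection per screenshot.
        try (Socket socket = new Socket("screen-server.example", 6000);   // placeholder host/port
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            for (int i = 0; i < 1000; i++) {
                byte[] frame = captureScreenshot();   // hypothetical helper, stubbed below
                out.writeInt(frame.length);           // length-prefix each frame on the stream
                out.write(frame);
            }
            out.flush();
        }
    }

    private static byte[] captureScreenshot() {
        return new byte[0];                           // stub for illustration only
    }
}
```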
I also read a blog (The link is missing) related to my problem which suggested that DatagramSocket is the solution because it does not execute handshakes.
Handshakes won't cause significant memory consumption.
The TCP three-way handshake happens once per connection and is cheap; it is not what is filling your heap.
You say you have looked on Google, but there are several pages on Google that address your question directly. There are several that have the same title as your question. You have even indicated that you understand some of the difference between them by using the [tcp] and [udp] tags on your question.
The difference is one uses TCP communication protocol and one uses the UDP communication protocol. Perhaps your question is not one about Java but about how the internet, computer networking, and the communication protocols work?
TCP is a connection oriented reliable delivery protocol.
UDP is a connectionless unreliable delivery protocol.
What this means is that you have to decide which is more important: speed or reliability.
Is the data such that it must not be corrupted in transit? If so, then you must use TCP or a ServerSocket.
If the data must arrive by the fastest method, even at the risk of getting lost, then you must use UDP or a DatagramSocket.
If you need more explanation to understand you should take a course on computer networking.

Java UDP packet sending and receiving issue

I'm working on a project for class using Java and UDP senders and receivers. The premise of the problem is to read in a text file, store the contents within a packet, send the packet, receive the packet, and read out the file on screen and create a new text document on the receiving computer of an identical text file.
I have all of that working. When I test with a local host it appears to work 100% of the time. When I send it from my laptop to my PC it appears to work 100% of the time. However, when I send it from my PC to my laptop it does not work.
I have several System.out debug statements to verify some of the information I send. I know that the text file should take 7 packets. However, whenever I send it from my PC to my laptop it says that I am sending 46 packets.
My initial thought is that maybe the packets are being sent out of order. The first packet I am sending indicates how many packets the receiver should be expecting to receive. I thought maybe for some reason the "46" could indicate a capital "F" so I removed all the capital "F" and it still says I'm sending 46 packets.
I thought maybe I was sending too much information at once so I used Thread.sleep() to give my receiver time to keep up -- that did not work either.
Finally, I read through the Oracle Documentation and some posts online and found out that UDP is unreliable. So, I'm assuming it could potentially be that. However, I want to just verify that that could be the issue.
Or if anyone has a better idea as to what could be causing the problem that would be awesome as well!
Thanks for your help :)
Yes, UDP is an unreliable protocol. UDP messages may be lost, and neither the sender nor the receiver will get any notification.
The 7 packets becoming 46 packets is typically due to fragmentation at the IP level. The protocol layer beneath IP (e.g. physical Ethernet frames, Wi-Fi frames, etc.) typically has a hard limit on the largest IP packet that can be sent "in one go", and similar limits are imposed by network routers, gateways and so on. If an IP packet larger than the limit is sent, two things can happen:
The IP packet may be turned into "fragments" that need to be reassembled by the receiver.
The intermediate equipment can send an ICMP message back to the sender telling it to send smaller IP packets.
In either case, the net result is that the number of IP packets needed to send a given sized UDP message can vary, depending on the network.
Of course, if a UDP message needs to be sent as multiple IP packets, AND there is local congestion in the network, that will increase the likelihood of packets, and hence messages, being lost.
But the bottom line is that UDP is not reliable. If you want reliability, the simple solution is to use TCP instead.
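For instance, a minimal sketch of the TCP alternative (the file name, host and port are placeholders); the stream takes care of segmentation, ordering and retransmission for you:

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TcpFileSender {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Paths.get("input.txt"));   // placeholder file
        try (Socket socket = new Socket("receiver.example", 7000);  // placeholder host/port
             OutputStream out = socket.getOutputStream()) {
            out.write(data);   // TCP handles segmentation, ordering and retransmission
        }
    }
}
```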

Only 16 UDP 512-byte packets being received by server when I sent 7 MB total

I break a 7 MB file into 512-byte chunks and send it with UDP to a server. About 14000 packets get sent by the client, but on the server side socket.receive(packet) blocks after receiving only 16 packets.
Any ideas what's going on here?
UDP is defined as an unreliable protocol. Packets may be lost without the sender being informed. They may also arrive out of order, and even duplicates may arrive.
UDP is suitable for purposes where error checking and correction is either unnecessary, or is performed by the application itself.
If you want a reliable protocol, start using TCP.
In contrast to TCP, UDP ensures neither packet order nor actual delivery (there is no flow control as in TCP). See this question: ensuring packet order in UDP
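If you want to see the loss instead of just blocking on packets that will never arrive, one option is to track which chunks actually made it. This is only a sketch: it assumes the sender prefixes each 512-byte chunk with a 4-byte sequence number, and the port, timeout and expected count are placeholders.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.util.BitSet;

public class LossCountingReceiver {
    public static void main(String[] args) throws Exception {
        int expected = 14000;                         // placeholder: chunks the sender will send
        BitSet seen = new BitSet(expected);

        try (DatagramSocket socket = new DatagramSocket(7001)) {   // placeholder port
            socket.setSoTimeout(5_000);               // give up after 5 seconds of silence
            byte[] buf = new byte[2048];
            try {
                while (seen.cardinality() < expected) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    // assumes the sender prefixed each chunk with a 4-byte sequence number
                    int seq = ByteBuffer.wrap(packet.getData(), 0, packet.getLength()).getInt();
                    seen.set(seq);
                }
            } catch (SocketTimeoutException e) {
                // silence for too long: report whatever arrived
            }
            System.out.println("received " + seen.cardinality() + " of " + expected + " chunks");
        }
    }
}
```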

Is it possible to force a packet to be fragmented in Java?

I have a bug which is caused by fragmented packets. I would like to test this bug by creating a fragmented packet in the test and sending it to the software containing the bug.
How would I go about doing this?
Any guidance or alternative approaches appreciated, thanks.
If you're talking about TCP fragments, those should be hidden (reassembled) by the OS upon receipt, unless you use a low-level packet capture facility, e.g. Ethereal.
To force sending of TCP fragments, decrease the maximum packet/segment size on some router, and/or configure the sending OS to use a larger MSS than will fit.
In Windows, you can change the MTU size in the registry. Don't know about other platforms.
Do the frags need to be part of a TCP stream? Or would any IP frag do? They are easily generated for UDP simply by making the datagram bigger than the MTU. Usually, 2k will do fine but if your LAN has jumbo frames enabled 10k or 20k will still produce frags.
It's easy to fragment packets with UDP: if you send a UDP datagram larger than the MTU, it will be fragmented. It's not so easy with TCP; the OS will not knowingly fragment a TCP packet. Changing the host or router MTU doesn't help either, because most OSes do path MTU discovery and will find the smallest MTU along the path.
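For example, a quick sketch of forcing IP fragmentation from Java by sending an oversized UDP datagram (the host and port are placeholders); watching the result in Wireshark should show the fragments:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ForceFragmentation {
    public static void main(String[] args) throws Exception {
        byte[] oversized = new byte[3000];   // bigger than a 1500-byte MTU, so IP must fragment
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    oversized, oversized.length,
                    InetAddress.getByName("target-host.example"), 8000);   // placeholder host/port
            socket.send(packet);
        }
    }
}
```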
You should use something like a packet generator to simulate fragmented TCP packets.
