Two-way server-client UDP in Java

I want to write a program in Java that handles two-way communication between a server and a client using UDP. Most of the sources online cover only one direction, from client to server. I want the server to be able to send messages to the client as well.

If you cannot use TCP, you can still achieve the same behaviour with UDP.
There are three aspects to consider.
First, the one you mentioned: you want to communicate in both directions. You can do that by running a sender and a listener thread on both the client and the server (see the sketch after this list).
Second: UDP packets are not guaranteed to arrive. You have to implement ACK logic in your application layer.
Third: UDP packets are not guaranteed to arrive in order. You have to implement some kind of ordering in your application layer.
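A minimal sketch of the first point: one DatagramSocket shared by a listener thread and the main (sending) thread. The ports, host name, and payload are illustrative, and the ACK and ordering logic from the other two points is deliberately left out.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Minimal two-way UDP peer: one thread listens, the main thread sends.
    // Run one instance with local port 5000 / remote port 5001 and a second
    // instance with the ports swapped.
    public class UdpPeer {
        public static void main(String[] args) throws Exception {
            int localPort = Integer.parseInt(args[0]);   // e.g. 5000
            int remotePort = Integer.parseInt(args[1]);  // e.g. 5001
            InetAddress remoteHost = InetAddress.getByName("localhost");

            DatagramSocket socket = new DatagramSocket(localPort);

            // Listener thread: blocks on receive() and prints whatever arrives.
            Thread listener = new Thread(() -> {
                byte[] buf = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    try {
                        socket.receive(packet);
                        String msg = new String(packet.getData(), 0,
                                packet.getLength(), StandardCharsets.UTF_8);
                        System.out.println("received: " + msg);
                    } catch (Exception e) {
                        break; // socket closed
                    }
                }
            });
            listener.setDaemon(true);
            listener.start();

            // Sender: the main thread sends a message to the other peer.
            byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(data, data.length, remoteHost, remotePort));
            Thread.sleep(5000); // keep the process alive long enough to receive
            socket.close();
        }
    }

Because both sides own a socket and a listener thread, "client" and "server" become symmetric: either one can initiate a message at any time.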

UDP is a connectionless protocol on top of IP. That just means there is no established connection on the receiving end; you just receive packets of data. To answer back, you have to send a packet "back" to the client.
For this, the client needs to be reachable, however. This might or might not work through firewalls. Usually firewalls are "punched through" if the client initiates the conversation, but there is no guarantee.
Note also that UDP packets may arrive out of order, duplicated, or not at all, and you have to be ready for all three. Packets bigger than the MTU get fragmented, which gives them a higher chance of not arriving, since losing any one fragment loses the whole datagram.
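Concretely, "answering back" means reusing the source address and port recorded in the packet you received. A sketch (the port and payload are illustrative):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.nio.charset.StandardCharsets;

    // One server-side receive/reply round trip. The key point is reusing
    // the request packet's source address and port for the response.
    public class ReplyingServer {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket serverSocket = new DatagramSocket(5000)) {
                byte[] buf = new byte[1024];
                DatagramPacket request = new DatagramPacket(buf, buf.length);
                serverSocket.receive(request); // blocks until a client sends

                byte[] reply = "ack".getBytes(StandardCharsets.UTF_8);
                serverSocket.send(new DatagramPacket(
                        reply, reply.length,
                        request.getAddress(),  // client's IP, from the packet
                        request.getPort()));   // client's (possibly NAT-mapped) port
            }
        }
    }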

Related

Can websocket messages get lost or not?

I'm currently developing a Java WebSocket client application, and I have to make sure that every message from the server is received by the client. Is it possible that I lose some messages (once they are sent from the server) due to a connection interruption? WebSocket is based on TCP, so this shouldn't happen, right?
It can happen. TCP guarantees the order of packets, but it does not guarantee that every packet sent by a server reaches the client when an unrecoverable failure happens in the underlying network. Imagine someone pulls out your LAN cable or switches off your WiFi access point at the worst possible moment while your application is communicating with your server. TCP cannot overcome that kind of failure.
To ensure that every WebSocket message sent from your server reaches your client, you have to implement some kind of acknowledgement scheme (an application-level SYN/ACK) in the application layer.
TCP is a guaranteed protocol: packets will be received in the correct order by the higher application levels at the far end (as opposed to UDP, which is a send-and-hope protocol).
Generally speaking, TCP should be used for connections where all the data must arrive correctly at the far end. UDP is used where a missing packet can be dropped without significant issue (e.g. streaming services, NTP updates).
In my game, to counter missed WebSocket messages, I added an int/long ID to each message. When the client detects a gap in the sequence of IDs it receives, it requests the missing data from the server so it can recover properly.
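A sketch of that ID scheme; handleMessage, requestResync, and process are hypothetical names standing in for your own transport and recovery calls:

    // Gap detection with per-message sequence IDs.
    // handleMessage() is called for each message received from the server;
    // requestResync() stands in for whatever recovery call your protocol has.
    public class SequenceChecker {
        private long expectedId = 0;

        public void handleMessage(long id, String payload) {
            if (id != expectedId) {
                // A message was lost or reordered: ask the server to resend
                // everything from the last ID we actually processed.
                requestResync(expectedId);
                return;
            }
            expectedId = id + 1;
            process(payload);
        }

        private void process(String payload) { /* application logic */ }
        private void requestResync(long fromId) { /* ask server to resend */ }
    }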
TCP has flow control and, more importantly here, acknowledgement and retransmission, which together provide reliable, ordered, and error-checked delivery.
In other words, TCP is a protocol that constantly checks whether the data arrived.
The protocol has several mechanisms to ensure that.
You can see the difference between TCP and UDP (which has none of these guarantees) in the link below.
Difference between tcp and udp

Java: what is the difference between ServerSocket and DatagramSocket?

Basically I am new to server and client programming in Java. I googled all the necessary resources to learn about this particular topic; however, I did not understand the difference between them.
What I understand so far is that both of them can handle client requests, but I need to know the benefits of each class and the particular scenarios or specific cases where each can be used efficiently.
For instance, I have a client/server program, a small subset of TeamViewer, in which the client must send a screenshot to the server every millisecond, and the server publishes it to another connected client. The code works, but I found that ServerSocket consumes a lot of heap, although it delivers successfully to the server and client. I also read a blog (the link is missing) related to my problem which suggested that DatagramSocket is the solution because it does not perform handshakes.
I am really concerned about the benefits and disadvantages of these classes.
A ServerSocket is for accepting incoming network connections on some stream protocol; e.g. TCP/IP.
A DatagramSocket is for sending and receiving datagrams on some connectionless datagram / message protocol; e.g. UDP/IP
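The difference shows up directly in how you use the two classes. A minimal sketch of each (the port and echo behaviour are illustrative):

    import java.net.*;

    public class SocketKinds {
        // TCP: accept a connection, then read/write streams on it.
        static void tcpServer() throws Exception {
            try (ServerSocket server = new ServerSocket(8080)) {
                try (Socket connection = server.accept()) {  // blocks until a client connects
                    int b = connection.getInputStream().read();
                    connection.getOutputStream().write(b);   // echo one byte back
                }
            }
        }

        // UDP: no connection; each receive() hands you one standalone datagram.
        static void udpServer() throws Exception {
            try (DatagramSocket socket = new DatagramSocket(8080)) {
                byte[] buf = new byte[512];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);                      // blocks until a datagram arrives
                socket.send(new DatagramPacket(packet.getData(), packet.getLength(),
                        packet.getAddress(), packet.getPort())); // echo it back
            }
        }
    }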
Supplementary questions:
Basically what is a datagram
A datagram is bunch of information sent in a single logical packet. For example, a UDP packet.
and does this mean datagram = lightweight packets ?
It depends on your definition of lightweight!
UDP datagrams are sent as IP packets. If a UDP datagram is too big for an IP packet, it is broken into multiple IP packets by the sender and reassembled by the receiver.
and what does connectionless [mean],
It means that no logical connection exists between the 2 parties. If a component IP packet of a UDP datagram is lost, the UDP datagram is lost. The receiver never knows (at the application level). There is no reporting of data loss and no retrying in UDP. This is typical "connectionless" behavior.
does it mean Data might get lost during transmission?
Basically, yes. If you want reliable / lossless data transmission, you should use ServerSocket and Socket; e.g. TCP/IP streams.
However, be aware that even with a (bare) TCP/IP stream, data delivery is not guaranteed:
If there is a network failure, or if either the sender or receiver has a failure, then a connection can be broken while data is in transit. That will result in data loss ... for that connection. (Sockets do not support reconnecting.) If the sender and/or receiver are still alive they will typically be informed that the connection has been broken, but they won't know why, or how much data was lost in transit.
It is possible for data to be corrupted in transit in ways that TCP/IP's error detection cannot spot. The receiver won't know this has happened.
Both of these issues can be addressed at the application protocol level; e.g. using message queues for the first and strong encryption and strong checksumming for the second.
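For the second issue, an application-level checksum can be as simple as shipping a digest alongside each message. A sketch using the standard MessageDigest API (the framing and transport are left to you):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Sender computes a SHA-256 digest and ships it with the payload;
    // receiver recomputes and compares.
    public class Checksums {
        static byte[] digest(byte[] payload) throws Exception {
            return MessageDigest.getInstance("SHA-256").digest(payload);
        }

        static boolean verify(byte[] payload, byte[] receivedDigest) throws Exception {
            return MessageDigest.isEqual(digest(payload), receivedDigest);
        }

        public static void main(String[] args) throws Exception {
            byte[] msg = "important data".getBytes(StandardCharsets.UTF_8);
            byte[] sum = digest(msg);              // send msg + sum together
            System.out.println(verify(msg, sum));  // receiver side: true if intact
        }
    }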
Concerning your attempt to use ServerSocket.
The code works, but I found that ServerSocket consumes a lot of heap, although it delivers successfully to the server and client.
You are doing something wrong. If you use the API appropriately the memory overheads should be insignificant.
My guess is that you are doing one or more of the following:
Opening a new connection for each client / server interaction
On the server side, creating a new thread for each connection
Not closing the connections.
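A server loop that avoids all three of these mistakes might look like the following sketch: one long-lived ServerSocket, a bounded pool instead of a thread per connection, and try-with-resources so every socket is closed. The pool size, port, and handle method are assumptions.

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BoundedServer {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(16);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();
                    pool.submit(() -> {
                        try (Socket c = client) { // always closed, even on error
                            handle(c);            // your application logic
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
            }
        }

        static void handle(Socket client) throws Exception { /* read/write streams */ }
    }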
I also read a blog (the link is missing) related to my problem which suggested that DatagramSocket is the solution because it does not perform handshakes.
Handshakes won't cause significant memory consumption.
The TCP three-way handshake is a small, fixed exchange performed by the OS network stack, not by your Java code, so it is not where your heap is going.
You say you have looked on Google, but there are several pages on Google that address your question directly; several even have the same title as your question. You have also indicated that you understand some of the difference between them by using the [tcp] and [udp] tags on your question.
The difference is one uses TCP communication protocol and one uses the UDP communication protocol. Perhaps your question is not one about Java but about how the internet, computer networking, and the communication protocols work?
TCP is a connection oriented reliable delivery protocol.
UDP is a connectionless unreliable delivery protocol.
What this means is that you have to decide which is more important: speed or reliability.
Is the data such that it must not be corrupted or lost in transit? If so, you must use TCP (a ServerSocket).
If the data must arrive by the fastest method, even at the risk of getting lost, then use UDP (a DatagramSocket).
If you need more explanation, you should take a course on computer networking.

What is better for instant messenger TCP or UDP?

I need to implement client/server instant messenger using pure sockets in Java lang.
The server should serve a large number of clients, and I need to decide which sockets I should use: TCP or UDP.
Thanks, Costa.
TCP
Reason:
TCP: "There is absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent."
UDP: "There is no guarantee that the messages or packets sent would reach at all."
Learn more at: http://www.diffen.com/difference/TCP_vs_UDP
Would you want your chat messages to possibly get lost?
Edit: I missed the part about "large chat program". I think because of the nature of a chat program it needs to be a TCP server; I cannot imagine sending the actual text content typed by users over UDP.
Note that the often-quoted limit of 65,536 simultaneous TCP connections applies per client address (a connection is identified by the full address/port tuple at both ends), so a single server can handle far more concurrent connections, limited by file descriptors and memory. If you really do outgrow one machine, you could create a dispatcher server that sends incoming connects to the appropriate server depending on current server load.
You could use both. Use TCP for exchanging the actual messages, so no data is lost and streaming large messages (e.g. containing JPEGs) is possible. Use UDP only for sending short 'connectNow' messages to clients for which there are messages queued. The clients could have states like (NotLoggedIn, TCPconnected, TCPdisconnected, LoggedOut), with various timeouts to control the state transitions as well as the normal message-exchange events. The UDP 'connectNow' message would instruct clients in 'TCPdisconnected' to connect and so move to 'TCPconnected', where they would stay, exchanging messages, until some inactivity timer instructs the client to disconnect for now. This would, of course, be unreliable, so you may wish to repeat the 'connectNow' message every X seconds, up to N times, until the client connects. The client should, in any case, attempt a poll every X minutes, just in case...
It depends on whether the user needs to know if the messages have been delivered to the server. UDP packets have no inherent acknowledgement. If the client sends an IM message to the server and it gets lost in transit, neither the client nor the server will know about it.
(The short answer is "use TCP" ... but it is worth thinking through the design implications for yourself.)
TCP would give you reliability, which is certainly desirable for instant messaging; you would not want messages to be dropped mid-conversation.
However, if you intend to support group messaging, then you might end up using multicast. For such cases, UDP would be the right choice, since UDP can handle point-to-multipoint. Using TCP for multicast applications would be hard, since the sender would have to keep track of retransmissions and the sending rate for multiple receivers. One alternative could be to use TCP for point-to-point chat and UDP for group messaging.
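For the group-messaging case, Java exposes multicast directly through MulticastSocket. A receiving sketch; the group address 230.0.0.1 and port 4446 are arbitrary examples:

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.MulticastSocket;
    import java.nio.charset.StandardCharsets;

    // Receiver joins a multicast group; any sender can then reach all
    // members of the group with a single send.
    public class GroupChat {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.1");
            try (MulticastSocket socket = new MulticastSocket(4446)) {
                // null network interface = use the default interface
                socket.joinGroup(new InetSocketAddress(group, 4446), null);

                byte[] buf = new byte[512];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // one copy of every message sent to the group
                System.out.println(new String(packet.getData(), 0,
                        packet.getLength(), StandardCharsets.UTF_8));
            }
        }
    }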

no-blocking server and thread safe

I have a server which uses one thread to receive UDP DatagramPackets from a remote data source, and a TCP ServerSocket to listen for remote clients' requests and spawn a dedicated thread for each client.
I want to forward each DatagramPacket through the ServerSocket connections to the multiple clients, and I am now encountering significant packet loss. Could anybody give some advice?
Thanks in advance.
Could it simply be the wrong choice of protocols in the design?
The system is supposed to be reliable for multiple clients, since you use TCP, but that reliability fails because of the dependency on UDP introduced through the coupling (bridging/rebroadcasting) on the server side.
UDP is only suitable for reliable applications if you design around the fact that packets will be lost.
Solution 1: change the protocol.
Solution 2: if changing the protocol is not possible, change the user's expectations on the client side about the quality of service.
Solution 3: add redundancy on the UDP side: repeat requests, stockpile data ahead of time in anticipation of future quality drops, and maintain a large accumulated cache of data so you can feed the clients no matter what.
It would be more to the point to get rid of UDP at the sender rather than try to shoehorn it somehow into your TCP design at the receiver, where it is already too late. The packets are lost between the sender and receiver, not in your receiver. Fixing the receiver code won't fix the actual problem.
Without knowing anything about your application design, I can make the following guesses:
Your UDP source is sending more packets than your receiver can handle causing packets to be dropped, because
your receiver is being blocked when passing off the packets to the TCP clients, because
your TCP clients are not picking up the packets quickly enough causing the buffer to fill up (which forces the server to block which causes it to miss UDP packets).
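A common way to stop the UDP receiver from blocking is to decouple it from the TCP writers with a bounded queue per client, dropping data for clients that fall behind instead of stalling everyone. A sketch (the class and method names are illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // The UDP thread only enqueues; a dedicated writer thread per TCP client
    // drains its own queue. offer() drops data for slow clients instead of
    // blocking the receiver and causing UDP packets to be missed for everyone.
    public class ClientFeed {
        private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1024);

        // Called from the single UDP receive thread. Never blocks.
        public void publish(byte[] datagram) {
            if (!queue.offer(datagram)) {
                queue.poll();          // drop the oldest entry...
                queue.offer(datagram); // ...to make room for the newest
            }
        }

        // Called from this client's writer thread. Blocks until data is available.
        public byte[] next() throws InterruptedException {
            return queue.take();
        }
    }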

How to get Acknowledgement in TCP communication in Java

I have written a socket program in Java. Both server and client can send/receive data to each other. But I found that when the client sends data to the server using TCP, TCP internally sends an acknowledgement to the client once the data is received by the server. I want to detect or handle that acknowledgement. How can I read or write data in TCP so that I can handle the TCP acknowledgement? Thanks.
This is simply not possible, even if you were programming in C directly against the native OS sockets API. One of the points of the sockets API is that it abstracts this away for you.
The sending and receiving of data at the TCP layer doesn't necessarily correlate with your Java calls to send or receive data. The data you send in one Java call may be broken into several pieces which may be buffered, sent and received independently, or even received out of order.
See here for more discussion about this.
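Because of this, applications usually frame their own messages and read until a complete frame has arrived, for example with a length prefix. A sketch using DataInputStream/DataOutputStream:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;

    // One write() on the sender may arrive as several reads on the receiver.
    // A length prefix lets the receiver reassemble a full message regardless
    // of how TCP split it up in transit.
    public class Framing {
        static void writeMessage(OutputStream out, byte[] payload) throws Exception {
            DataOutputStream dout = new DataOutputStream(out);
            dout.writeInt(payload.length); // 4-byte length prefix
            dout.write(payload);
            dout.flush();
        }

        static byte[] readMessage(InputStream in) throws Exception {
            DataInputStream din = new DataInputStream(in);
            int length = din.readInt();
            byte[] payload = new byte[length];
            din.readFully(payload); // loops internally until all bytes arrive
            return payload;
        }
    }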
Any data sent over a TCP socket is acknowledged in both directions. Data sent from client to server is the same as data sent from server to client as far as TCP wire communications and application signaling. As #Eric mentions, there is no way to get at that signaling.
It may be that you are talking about timing out while waiting for the response from the server. That you'd like to detect if a response is taking too long. Is it possible that the client's message is larger than the server's response so the buffering is getting in the way of the response but not the initial request? Have you tried to use non-blocking sockets?
You might want to take a look at the NIO code if you have not already done so. It has a number of classes that give you more fine grained control over socket communications.
This is not possible in pure Java, since Java's networking APIs all work at the socket level, which hides the TCP details.
You need an interface that exposes IP-layer data so you can see the TCP headers. DLPI is the most popular API for doing this:
http://www.opengroup.org/onlinepubs/9638599/chap1.htm
Unfortunately, there is no Java implementation of such an interface; you have to use native code through JNI to do this.
I want to detect or handle that acknowledgement.
There is no API for receiving or detecting the ACKs at any level above the protocol stack.
Rethink your requirement. Knowing that the data got as far as the server's TCP stack is of no use to an application. What you want to know is that the peer application has received it, and for that you have to get the peer application to acknowledge it at the application protocol level.
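In code, that means the acknowledgement is just another message in your own protocol. A sketch of the client side, assuming a hypothetical line-based protocol where the server replies "ACK" after processing each request:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Application-level acknowledgement: the client doesn't trust TCP's ACKs
    // (which it can't see anyway); it waits for the peer application to reply.
    public class AckClient {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 8080);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.println("DATA hello");       // send the request
                String reply = in.readLine();    // block for the app-level ACK
                if (!"ACK".equals(reply)) {
                    throw new IllegalStateException("peer did not acknowledge: " + reply);
                }
            }
        }
    }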
