I'm building a UDP client that can communicate with a selection of different servers. Given that an NIO application typically uses a single receive thread, how can I dispatch incoming datagrams to the correct part of my application, i.e. associate incoming packets with the outgoing packets they respond to?
In theory, when sending (or connecting?) to a server, it should be possible to get the source IP/port of the outgoing datagram and then recognise incoming packets as responses by inspecting their destination IP/port. (because: http://www.dcs.bbk.ac.uk/~ptw/teaching/IWT/transport-layer/source-destination.gif)
Most UDP client examples seem to assume a single server, so that identifying incoming datagrams as responses to outgoing datagrams is trivial, for example:
ByteBuffer textToEcho = ByteBuffer.wrap("blah".getBytes());
ByteBuffer echoedText = ByteBuffer.allocateDirect(MAX_PACKET_SIZE);
DatagramChannel datagramChannel = DatagramChannel.open(StandardProtocolFamily.INET);
datagramChannel.connect(new InetSocketAddress(REMOTE_IP, REMOTE_PORT));
while (true)
{
    int sent = datagramChannel.write(textToEcho);
    textToEcho.rewind();   // write() advances the position, so reset it before resending
    datagramChannel.read(echoedText);
    echoedText.clear();    // in a real client you'd flip() and process the response first
}
Perhaps I could use multiple DatagramChannels and iteratively call read() on each, dispatching the data to wherever my application is expecting responses?
If you're dead-set on using just one channel (and one bound local port), you need to avoid using the connect and write methods. Instead, use the send method.
Looping through your servers, use the send method for each server. You will need to rewind() your ByteBuffer after each send, since send() advances the buffer's position; it does not work on a copy.
Once you've sent to all servers, loop for as long as there are servers that haven't responded, using receive to get both the returned data (the buffer argument) and the server that returned it (the method's return value). Keep looping until the server list is exhausted, but put a time limit on the loop itself to cope with dead servers or lost packets.
Ideally, in the receive loop you want the receive call to block for a short period before timing out. If you can't find a way to configure a blocking timeout, you can use non-blocking mode and put a Thread.sleep in your loop instead. Timed blocking is the better approach if you can get it working.
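A rough sketch of that send/receive pattern, using the non-blocking-plus-sleep fallback described above; the server addresses, packet size and 2-second overall limit are made up for illustration:
List<SocketAddress> servers = List.of(
        new InetSocketAddress("10.0.0.1", 9000),   // hypothetical servers
        new InetSocketAddress("10.0.0.2", 9000));

DatagramChannel channel = DatagramChannel.open(StandardProtocolFamily.INET);
channel.bind(null);                 // one local port shared by all exchanges
channel.configureBlocking(false);   // non-blocking; we poll with a short sleep

ByteBuffer request = ByteBuffer.wrap("blah".getBytes());
for (SocketAddress server : servers) {
    channel.send(request, server);
    request.rewind();               // send() advances the position; reset it for the next server
}

ByteBuffer response = ByteBuffer.allocateDirect(1500);   // MAX_PACKET_SIZE
Set<SocketAddress> pending = new HashSet<>(servers);
long deadline = System.currentTimeMillis() + 2000;       // overall time limit
while (!pending.isEmpty() && System.currentTimeMillis() < deadline) {
    response.clear();
    SocketAddress from = channel.receive(response);      // null if nothing has arrived yet
    if (from == null) {
        Thread.sleep(10);
        continue;
    }
    if (pending.remove(from)) {
        response.flip();
        // dispatch 'response' to whatever part of the application was waiting on 'from'
    }
}
The key point is that receive() tells you which server each datagram came from, which is what lets a single channel serve several conversations.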
You should open a separate datagram channel to each server with which you wish to communicate, and hand off that channel's management (reading/writing) to a separate thread.
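For what it's worth, a minimal sketch of that approach, reusing the placeholders from the question (REMOTE_IP, REMOTE_PORT, MAX_PACKET_SIZE) and a hypothetical handleResponse callback:
DatagramChannel channel = DatagramChannel.open(StandardProtocolFamily.INET);
channel.connect(new InetSocketAddress(REMOTE_IP, REMOTE_PORT));   // one channel per server

Thread reader = new Thread(() -> {
    ByteBuffer buffer = ByteBuffer.allocateDirect(MAX_PACKET_SIZE);
    try {
        while (!Thread.currentThread().isInterrupted()) {
            buffer.clear();
            channel.read(buffer);     // blocks; a connected channel only receives from that server
            buffer.flip();
            handleResponse(buffer);   // hand off to whatever is waiting on this server
        }
    } catch (IOException e) {
        // channel closed or I/O error; let the thread exit
    }
});
reader.start();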
Related
I'm building a client/server type of application, where the client will be sending millions of messages. I have a thread whose only purpose is to send packets through a DatagramSocket. The problem is that, right now, the thread is calling the send() method so many times that some packets are being dropped because the internal send buffer is full.
Is there a way, in Java, to have the send() call of a DatagramSocket block if the buffer is already full, so that packets are not dropped?
You can use a DatagramChannel instead of creating the socket directly.
The channel can be put in blocking mode, in which case, according to the documentation of send, the call will wait until buffer space is available.
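For example (a sketch only; the target address and buffer size are made up, and a full socket send buffer is the only thing this blocks on; the network can still drop datagrams further along):
DatagramChannel channel = DatagramChannel.open();                    // blocking mode by default
channel.setOption(StandardSocketOptions.SO_SNDBUF, 1 << 20);         // optionally enlarge the send buffer

InetSocketAddress target = new InetSocketAddress("10.0.0.1", 9000);  // hypothetical receiver
ByteBuffer packet = ByteBuffer.wrap(payload);                        // payload: your message bytes
channel.send(packet, target);   // in blocking mode this waits until buffer space is available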
I'm trying to write a routine that will poll incoming UDP multicast messages sent to multiple ports on a single multicast group, across all network interfaces.
I can do this using DatagramSocket, but I can't find a way to check if data is available, or to make it non-blocking. All I can do is set a timeout, call receive and wait for an exception if there's nothing there.
Usually, there is at most one port and one network interface with data, so with 4 ports and 4 network interfaces and a timeout of 50ms, it takes ~800ms to read.
If I look at equivalent C# code, there is a Socket.Available property which returns the amount of data ready to be read. If it's zero, I can skip the socket (/port/network interface) and reading is much faster.
Is there a way to do something similar in Java?
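I don't know of a direct equivalent of Socket.Available on DatagramSocket, but the usual Java approach is a Selector over non-blocking DatagramChannels, so a single call tells you which port/interface combination actually has data. A rough sketch (Java 7+; the group address, ports and 50 ms wait are made up, and binding the same port on several channels relies on SO_REUSEADDR):
InetAddress group = InetAddress.getByName("239.1.2.3");   // hypothetical multicast group
int[] ports = {5000, 5001, 5002, 5003};

Selector selector = Selector.open();
for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
    if (!nif.isUp() || !nif.supportsMulticast()) continue;
    for (int port : ports) {
        DatagramChannel ch = DatagramChannel.open(StandardProtocolFamily.INET)
                .setOption(StandardSocketOptions.SO_REUSEADDR, true)
                .bind(new InetSocketAddress(port));
        ch.join(group, nif);
        ch.configureBlocking(false);               // required before registering with a Selector
        ch.register(selector, SelectionKey.OP_READ);
    }
}

ByteBuffer buf = ByteBuffer.allocate(65535);
while (selector.select(50) > 0) {                  // wait at most 50 ms for any channel
    for (SelectionKey key : selector.selectedKeys()) {
        DatagramChannel ch = (DatagramChannel) key.channel();
        buf.clear();
        SocketAddress sender = ch.receive(buf);    // the channel is ready, so this returns immediately
        // ... handle buf ...
    }
    selector.selectedKeys().clear();
}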
I'm programming a UDP server. Right now, when client code needs to send data, every thread representing a "connection" puts a datagram on a blocking queue, and the server thread then takes each datagram from the queue and sends it.
Peeking into DatagramSocket.send, I see it synchronizes on the DatagramPacket, but I cannot tell whether, at the end of the day, it would perform better to queue everything or to send directly. With the latter I suspect I could use direct ByteBuffers.
So my question is: would it be wiser, in terms of performance, to queue everything or to send directly?
Just send it directly. The socket send buffer already is a queue. The complication of another queue and another thread adds no value at all. Just another thing to go wrong.
I am currently trying to create a chat application using the Socket and ServerSocket classes, but I've run into a roadblock. I need some kind of listener that executes a certain block of code when a message comes in from the server or the client, but I can't seem to find one. An option would of course be to just check for incoming messages every 10 ms or so, but isn't there a smarter solution?
In general, you should assign a Thread to each Socket you are reading, so that the thread can block on the socket and wait for incoming data.
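A small sketch of that pattern; handleMessage is a hypothetical callback, and the same idea works for a client-side Socket:
Socket socket = serverSocket.accept();   // or new Socket(host, port) on the client side
Thread listener = new Thread(() -> {
    try (BufferedReader in = new BufferedReader(
            new InputStreamReader(socket.getInputStream()))) {
        String line;
        while ((line = in.readLine()) != null) {   // blocks until a full line arrives
            handleMessage(line);                   // react immediately, no polling
        }
    } catch (IOException e) {
        // connection closed or broken; let the thread finish
    }
});
listener.start();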
You should take a look at DataFetcher: http://tus.svn.sourceforge.net/viewvc/tus/tjacobs/io/
This class can work asynchronously, and notify a FetcherListener when new data is available
I recommend Netty or Mina. As for Socket and ServerSocket, the read() calls block, so in a way the code after the read()s executes whenever there's incoming data.
Beware of incomplete messages though, because sockets provide a stream of bytes and applications are usually more comfortable with discrete messages.
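One common way to carve the byte stream into discrete messages is a length prefix. A rough sketch, assuming both ends agree on a 4-byte length followed by the message bytes:
// sender
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
byte[] msg = "hello".getBytes(StandardCharsets.UTF_8);
out.writeInt(msg.length);    // 4-byte length prefix
out.write(msg);
out.flush();

// receiver
DataInputStream in = new DataInputStream(socket.getInputStream());
int len = in.readInt();
byte[] body = new byte[len];
in.readFully(body);          // blocks until the whole message has arrived
String message = new String(body, StandardCharsets.UTF_8);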
I develop part of a JBoss+EJB based enterprise application. My module needs to process a huge amount of incoming UDP packets. I've done some load testing, and it looks like everything is fine when packets are sent at an 11 ms interval, but at a 10 ms interval some packets are lost. It's rather strange in my opinion, but I have run the 10/11 ms comparison several times and the result is always the same (10 ms: some "lost" packets, 11 ms: everything's fine).
If something were wrong with synchronization, I'd expect it to also show up in the 11 ms tests (at least one packet lost, or at least one wrong counter value).
So if it is not because of synchronization, then maybe the DatagramSocket through which I receive packets doesn't work as expected.
I found that the receive buffer size (SO_RCVBUF) defaults to 57344 (probably it depends on the underlying network I/O buffers). I suspect that when this buffer fills up, new incoming UDP datagrams are rejected. I tried setting this value higher, but I noticed that if I exaggerate, the buffer returns to its default size. If it depends on the underlying layer, how can I find out the maximum buffer size for a given OS/network card from the JBoss level?
Is it possible that this is caused by the receive buffer size, or is 57344 big enough to handle most cases? Do you have any experience with such issues?
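This is roughly how I set and check the buffer size (the 4 MB figure is just an example; PORT stands for the port I listen on):
DatagramSocket socket = new DatagramSocket(PORT);
socket.setReceiveBufferSize(4 * 1024 * 1024);    // request 4 MB
int actual = socket.getReceiveBufferSize();      // what the OS really granted
System.out.println("Effective SO_RCVBUF: " + actual);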
There is no timeout set on my DatagramSocket. My UDP datagrams contain about 70 bytes of data (not counting the datagram header).
[Edited]
I have to use UDP because I receive Cisco NetFlow data; it is a protocol used by network devices to send traffic statistics. Also, I have no influence on the format of the bytes sent (e.g. I cannot add counters to the packets and so on). It is not expected that all packets will be processed (some datagrams may be lost), but I'd expect to process most of them. During the 10 ms interval tests, about 30% of packets were lost.
It is unlikely that slow processing causes this issue. Currently a singleton component holds a reference to the DatagramSocket and calls its receive method in a loop. When a packet is received, it is passed to a queue and processed by a stateless component picked from a pool. The "facade" singleton is only responsible for receiving packets and passing them on for processing (it does not wait for a processing-complete event).
Thanks in advance,
Piotr
UDP does not guarantee delivery, so you can tweak parameters, but you can't guarantee that the message will get delivered, especially in the case of very large data transfers.
If you need to guarantee delivery, you should use TCP instead.
If you need (or want) to use UDP, you can encode each packet with a number, and also send the number of packets expected. For example, if you sent 10 large packets, you could include the information: packet 1/10, packet 2/10, etc. This way you can at least tell if you have not received all of the packets. If you have not received them, you could send a request to resend those missing packets.
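A rough sketch of numbering packets that way; the chunk, counters and addressing below are placeholders, and the receiver reads the two ints back and tracks which sequence numbers it has seen:
ByteBuffer buf = ByteBuffer.allocate(8 + chunk.length);
buf.putInt(sequenceNumber);   // packet i ...
buf.putInt(totalPackets);     // ... of n
buf.put(chunk);               // the payload slice itself
buf.flip();

DatagramPacket packet = new DatagramPacket(buf.array(), buf.limit(), serverAddress, serverPort);
socket.send(packet);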
UDP is inherently unreliable.
Datagrams can be thrown away at any point between sender and receiver, even within the receiver at a level below your code. Setting the recv buffer to a larger size is likely to help the networking code within your machine buffer more datagrams but you should expect that some datagrams will be lost anyway.
If your recv logic takes too long (i.e. longer than it takes for a new datagram to arrive) then you'll always be behind and you'll always miss datagrams eventually. All you can do is make sure that your recv code runs as fast as possible, perhaps move the inbound datagram to a queue and process it 'later' or on another thread but then that will just move your problem to being one where you have a queue that keeps growing.
[Re your edit...] And what's processing your queue, and how does the locking work between the producer and the consumers? Change your code so that the recv logic simply increments a count, discards the data and loops back around, and see if you're losing fewer datagrams. Either way, UDP is unreliable: you WILL have datagrams that are discarded, and you should just expect that and deal with it. Worrying about it means you're focusing on the wrong problem; make use of the data you DO get, assume that you won't get much of it, and then your program will work even if the network gets congested and MOST of your datagrams get discarded.
In summary, that's just how it is with UDP.
It appears from your tests that only up to two packets are ever in the buffer at once, so as long as each packet is less than about 28 KB the default buffer size should be fine.
As you know, UDP is lossy, but you should be able to send more than one packet per 10 ms. I suggest you write a simple receiver which just listens for packets, to determine whether the problem is your application or something at the network/OS level. (I suspect the latter.)
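Something along these lines is enough for that test: it does nothing but count datagrams, so if this also loses ~30% at a 10 ms interval the problem is below your application (PORT and the buffer sizes are placeholders):
DatagramSocket socket = new DatagramSocket(PORT);
socket.setReceiveBufferSize(1024 * 1024);            // ask for a bigger SO_RCVBUF
byte[] buf = new byte[1500];
DatagramPacket packet = new DatagramPacket(buf, buf.length);
long count = 0;
while (true) {
    socket.receive(packet);                          // do nothing with the data
    if (++count % 10000 == 0) {
        System.out.println("received " + count + " datagrams");
    }
}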
I don't know Java, but... does the API allow you to invoke an asynchronous listen/receive for a datagram:
Use the O/S API to do a receive (passing your application-level buffer as a parameter)
(Wait while there's nothing to receive...)
(O/S receives something from the network...)
O/S puts the received packet into the buffer and completes/returns your API call
If that's true, then I suggest you issue several concurrent instances of the API call, so that there are several concurrent application-level buffers into which multiple packets can be received.
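Standard Java has no asynchronous/overlapped datagram receive, but a rough equivalent of several outstanding receives is several reader threads, each with its own buffer, all blocking in receive() on the same socket; each arriving datagram is consumed by exactly one of them. A sketch only (NUM_READERS, PORT and process(...) are placeholders):
final int NUM_READERS = 4;
DatagramSocket socket = new DatagramSocket(PORT);
for (int i = 0; i < NUM_READERS; i++) {
    new Thread(() -> {
        byte[] buf = new byte[1500];                 // one buffer per reader thread
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        try {
            while (true) {
                socket.receive(packet);              // blocks; one datagram per call
                process(packet.getData(), packet.getLength());   // placeholder handler
            }
        } catch (IOException e) {
            // socket closed; let the thread exit
        }
    }).start();
}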