I'm building a client/server application where the client will be sending millions of messages. I have a thread whose only purpose is to send packets through a DatagramSocket. The problem is that the thread is calling the send() method so many times that some packets are being dropped because the internal send buffer is full.
Is there a way, in Java, to make a DatagramSocket's send() call block when the buffer is already full, so that packets are not dropped?
You can use a DatagramChannel instead of creating the socket directly.
The channel can be put in blocking mode, which, according to the documentation of send, means the call will wait until buffer space is available.
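A minimal sketch of that approach; the target address and the data array are placeholders:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

DatagramChannel channel = DatagramChannel.open();
channel.configureBlocking(true);   // blocking is the default mode; shown for clarity
InetSocketAddress target = new InetSocketAddress("203.0.113.5", 9000); // placeholder
ByteBuffer packet = ByteBuffer.wrap(data);   // data: the byte[] you want to send
// In blocking mode, send() waits until there is room in the underlying
// output buffer instead of returning immediately.
channel.send(packet, target);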
Related
I'm programming a UDP server. Right now, when client code needs to send data, every thread representing a "connection" puts a datagram on a blocking queue, and the server thread then reads each datagram and sends it.
Peeking into DatagramSocket.send I see it synchronizes on the DatagramPacket, but I cannot tell which would perform better at the end of the day: queueing everything or sending directly. With the latter I suspect I could use direct ByteBuffers.
So my question is: would it be wiser, in terms of performance, to queue everything or to send directly?
Just send it directly. The socket send buffer already is a queue. The complication of another queue and another thread adds no value at all; it's just another thing to go wrong.
I'm building a UDP client that can communicate with a selection of different servers. Given that an NIO application involves using a single receive thread, how can I dispatch incoming datagrams to the correct part of my application? i.e., associate incoming packets with the outgoing packets they answer.
In theory, when sending (or connecting?) to a server, it should be possible to record the source IP/port of the outgoing datagram and then recognise incoming packets as its responses by inspecting their destination IP/port (because: http://www.dcs.bbk.ac.uk/~ptw/teaching/IWT/transport-layer/source-destination.gif)
Most UDP client examples seem to assume a single server, in which case identifying incoming datagrams as responses to outgoing datagrams is trivial, for example:
ByteBuffer textToEcho = ByteBuffer.wrap("blah".getBytes());
ByteBuffer echoedText = ByteBuffer.allocateDirect(MAX_PACKET_SIZE);
DatagramChannel datagramChannel = DatagramChannel.open(StandardProtocolFamily.INET);
datagramChannel.connect(new InetSocketAddress(REMOTE_IP, REMOTE_PORT));
while (true)
{
    textToEcho.rewind();   // write() advances the buffer position
    int sent = datagramChannel.write(textToEcho);
    echoedText.clear();    // make room for the next datagram
    datagramChannel.read(echoedText);
}
Perhaps I could use multiple DatagramChannels and iteratively call read() on each, dispatching the data to wherever my application is expecting responses?
If you're dead-set on using just one channel (and one bound local port), you need to avoid using the connect and write methods. Instead, use the send method.
Looping through your servers, use the send method for each server. You will need to rewind() your byte buffer after each send: send does not clone the buffer, it advances the buffer's position.
When all servers have been sent: in a loop, as long as there are servers that haven't responded, use receive to get both the returned data (the buffer argument) and the server that returned it (the method's return value). Keep looping until the server list is exhausted, but put a time limit on the loop itself (for dead servers or lost packets).
Ideally, in the receive loop you want the receive method to block for a short period before timing out. If you can't find a way to configure timed blocking, you can use non-blocking mode and put a Thread.sleep in your loop instead. Try to get timed blocking working, though; that's the best way (a Selector's select(timeout) gives you exactly that, as in the sketch below).
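A minimal sketch of that receive loop, assuming a servers list of InetSocketAddress and illustrative timeout values; the timed blocking comes from Selector.select(timeout):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// payload: byte[] to send; servers: List<InetSocketAddress> to query
DatagramChannel channel = DatagramChannel.open();
channel.bind(new InetSocketAddress(0));        // one local port for everything
ByteBuffer request = ByteBuffer.wrap(payload);
for (InetSocketAddress server : servers) {
    request.rewind();                          // send() advances the position
    channel.send(request, server);
}

channel.configureBlocking(false);              // required before register()
Selector selector = Selector.open();
channel.register(selector, SelectionKey.OP_READ);

Set<InetSocketAddress> pending = new HashSet<>(servers);
ByteBuffer reply = ByteBuffer.allocate(1500);
long deadline = System.currentTimeMillis() + 2000;   // overall time limit
while (!pending.isEmpty() && System.currentTimeMillis() < deadline) {
    if (selector.select(200) == 0) continue;         // timed block, then re-check
    selector.selectedKeys().clear();
    reply.clear();
    InetSocketAddress from = (InetSocketAddress) channel.receive(reply);
    if (from != null) {
        pending.remove(from);
        // dispatch 'reply' to whatever part of the application awaits 'from'
    }
}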
You should open a separate datagram channel to each server with which you wish to communicate, and hand off that channel's management (reading/writing) to a separate thread.
Suppose that I have a sender socket and a receiver socket. The sender sends messages totalling 1 GB, but the receiver neither reads from nor closes its socket.
What happens to the 1 GB of messages before either socket closes? Is the data sitting somewhere in an OS buffer?
To be more specific...
Each sender has its own thread.
All senders have flushed their output stream.
All messages are passed over the loopback interface
Yes, the data will be sitting in buffers in the TCP/IP stack, though the buffers hold far, far less than 1 GB.
Assuming you use TCP, which employs flow control to deal with such a situation: the receiver's buffer will fill up. When the receive buffer is full, the sender will stop transmitting. The sender's buffer will then fill up too, and once it is full, the application's write/send calls will block until the receiver starts consuming the data or an error occurs.
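A small self-contained demonstration of this over loopback; how much is written before write() blocks depends on the OS buffer sizes (typically a few hundred KB, nowhere near 1 GB):

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class FlowControlDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket listener = new ServerSocket(0);
        Socket sender = new Socket("localhost", listener.getLocalPort());
        Socket receiver = listener.accept();   // accepted, but never read from

        OutputStream out = sender.getOutputStream();
        byte[] chunk = new byte[64 * 1024];
        long total = 0;
        while (true) {
            out.write(chunk);   // blocks for good once both socket buffers fill
            total += chunk.length;
            System.out.println("written so far: " + total);
        }
    }
}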
I am currently trying to create a chat application using the Socket and ServerSocket classes, but I've run into a roadblock. I need some kind of listener to execute a certain block of code when a message arrives from the server or the client, but I can't seem to find one. One option would be to poll for incoming messages every 10 ms or so, but isn't there a smarter solution?
In general, you should assign a Thread to each Socket you are reading, so that Thread can block on the socket and wait for incoming information.
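A minimal sketch of that pattern, assuming line-delimited messages; handleMessage is a hypothetical callback your application would supply:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

static void listenOn(Socket socket) {
    Thread reader = new Thread(() -> {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {   // blocks until a line arrives
                handleMessage(line);                   // hypothetical callback
            }
        } catch (IOException e) {
            // connection closed or lost
        }
    });
    reader.start();
}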
You should take a look at DataFetcher: http://tus.svn.sourceforge.net/viewvc/tus/tjacobs/io/
This class can work asynchronously and notify a FetcherListener when new data is available.
I recommend Netty or MINA. As for Socket and ServerSocket, the read() calls block, so in a way the code after each read() executes whenever incoming data arrives.
Beware of incomplete messages, though: sockets provide a stream of bytes, while applications are usually more comfortable with discrete messages.
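One common remedy is length-prefixed framing. A minimal sketch, assuming a 4-byte length prefix and that message is the byte[] payload:

import java.io.DataInputStream;
import java.io.DataOutputStream;

// Sending side: prefix each message with its length.
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeInt(message.length);
out.write(message);
out.flush();

// Receiving side: read back exactly one whole message.
DataInputStream in = new DataInputStream(socket.getInputStream());
int length = in.readInt();
byte[] body = new byte[length];
in.readFully(body);   // blocks until the entire message has arrived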
My application has a queue of "outgoing network packets" (a POJO holding a ByteBuffer and a SocketChannel), consumed by a single thread that writes the data to the SocketChannel.
I do this to guarantee that every client that should receive packets gets its turn. This means that SocketChannel.write() writes to multiple clients sequentially (one at a time).
Can anyone tell me what could go wrong with this approach? The SocketChannels are created from a ServerSocketChannel, so they're blocking.
I fear that the write() operation could block for one client, making the other clients wait...
The write() operation can indeed block in blocking mode. If you want fairness and a single thread, you will have to use non-blocking mode.
If a client socket fails to consume all the data in one (non-blocking) write, you could close the client. This will only happen when the socket's send buffer fills, so you can increase the send buffer to a level where you are comfortable doing this.
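A minimal sketch of that policy; the 1 MB send-buffer size is an illustrative choice, not a recommendation:

import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Once, after accept(): non-blocking mode plus a larger send buffer.
static void setUp(SocketChannel client) throws IOException {
    client.configureBlocking(false);
    client.setOption(StandardSocketOptions.SO_SNDBUF, 1 << 20);   // illustrative size
}

// In the single writer thread: write what fits; drop clients that fall behind.
static void sendOrDrop(SocketChannel client, ByteBuffer packet) throws IOException {
    client.write(packet);          // non-blocking: returns immediately
    if (packet.hasRemaining()) {   // kernel send buffer full -> client is too slow
        client.close();
    }
}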