In a TCP-based server-client model using Netty Channels, is there any correspondence between the number of Channel.write() calls on the server and the corresponding Channel.messageReceived() invocations on the receiving client? If I do 10 write()s on the sender, does that mean messageReceived() will be invoked 10 times on the listening client? Or can Netty aggregate the sent data (from the write()s on the sender) into fewer or more messageReceived() events on the client? Is there a way to configure this behaviour in Netty?
It's not guaranteed that you have a 1:1 mapping between your Channel.write(..) calls and messageReceived() invocations. You need to use a FrameDecoder subclass (possibly your own) which buffers the ChannelBuffer until enough data has arrived to dispatch a complete message to the next ChannelHandlers in the ChannelPipeline on the server.
Netty already ships with ready-to-use FrameDecoder implementations, such as DelimiterBasedFrameDecoder, which buffers the data until it receives a delimiter and then dispatches the frame to the next handlers in the ChannelPipeline.
See [1] for more details.
[1] http://netty.io/docs/stable/api/org/jboss/netty/handler/codec/frame/FrameDecoder.html
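For illustration, a minimal sketch of such a pipeline with the Netty 3 API might look like this (bootstrap and MyBusinessHandler are placeholders for your own bootstrap and business handler):

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;

// The frame decoder reassembles the TCP byte stream into complete frames
// (here: newline-delimited), so messageReceived() fires once per frame
// rather than once per write() on the sender.
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        pipeline.addLast("handler", new MyBusinessHandler()); // sees one ChannelBuffer per frame
        return pipeline;
    }
});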
Yes, there is a way to do this, but you'll have to give us more detail before anyone can say much else.
I'm trying to write a routine that will poll incoming UDP multicast messages sent to multiple ports on a single multicast group, across all network interfaces.
I can do this using DatagramSocket, but I can't find a way to check if data is available, or to make it non-blocking. All I can do is set a timeout, call receive and wait for an exception if there's nothing there.
Usually, there is at most one port and one network interface with data, so with 4 ports and 4 network interfaces and a timeout of 50ms, it takes ~800ms to read.
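To illustrate, my current approach looks roughly like this (sockets and handle stand in for my real setup):

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;
import java.util.List;

void pollOnce(List<DatagramSocket> sockets) throws IOException {
    byte[] buf = new byte[1500];
    for (DatagramSocket socket : sockets) {            // one socket per port/interface
        socket.setSoTimeout(50);                        // wait at most 50ms per socket
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        try {
            socket.receive(packet);                     // blocks until data arrives or the timeout fires
            handle(packet);                             // hypothetical handler for received data
        } catch (SocketTimeoutException timedOut) {
            // nothing available on this socket; move on to the next one
        }
    }
}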
If I look at equivalent C# code, there is a Socket.Available property which returns the amount of data ready to be read. If it's zero, I can skip the socket (/port/network interface) and reading is much faster.
Is there a way to do something similar in Java?
I am using gRPC streaming in Java. I have a long-lived open stream where the client and server communicate simultaneously. When I call onNext to send a message, gRPC buffers the message internally and sends it on the wire asynchronously. Now, if the stream is lost in the middle of sending data, onError is called. I wonder what the right practices are:
to find out which messages were sent successfully
how to retry unsent messages
Currently, I am thinking of implementing an "ack" mechanism in the application layer where for every x items received, the receiver sends back an ack message. Then in order to implement retries, I need to buffer items on the sender side and only remove them from the buffer when the ack is received. Also, on the receiver side, I need to implement a mechanism to ignore duplicate items received.
Example:
Suppose we send an ack for every 100 items. We receive the ack for batch 3 (items 200-300), and then we get an error while sending items 300-400. We retry sending items 300-400, but the client has already successfully received items 300-330 and is going to receive them again, so the client needs to ignore the first 30 items.
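Roughly, what I have in mind on the sender side is a sketch like this (RetryBuffer and its method names are just illustrative, not from any framework):

import java.util.Collection;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// Every outgoing item gets a sequence number; items stay buffered until an
// ack covering them arrives, and whatever is left can be resent after onError.
class RetryBuffer<T> {
    private final ConcurrentNavigableMap<Long, T> unacked = new ConcurrentSkipListMap<>();
    private final AtomicLong nextSeq = new AtomicLong();

    long register(T item) {                       // call before streamObserver.onNext(item)
        long seq = nextSeq.getAndIncrement();
        unacked.put(seq, item);
        return seq;                               // send this sequence number along with the item
    }

    void onAck(long ackedUpTo) {                  // receiver acks every N items
        unacked.headMap(ackedUpTo, true).clear(); // drop everything <= ackedUpTo
    }

    Collection<T> pendingForRetry() {             // resend these on a new stream after onError
        return unacked.values();
    }
}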
It is possible to implement this in the application layer. However, I am wondering if there are better practices/frameworks out there that solve this problem.
The term often used to describe delivering data from one place to another without loss is guaranteed delivery.
Your use case is similar to trying to provide guaranteed delivery over best effort delivery transport layers like UDP. The usual approach is to acknowledge every packet, although you could devise a scheme to check at a higher level as you suggest.
You also usually want to use some form of sliding window which means you don't have to wait for the previous ack before sending the next packet - this helps avoid delays.
There is a very good overview of this approach on UDP in this answer: https://stackoverflow.com/a/15630015/334402
For your case, you will receive a response for your RPC calls, which will effectively be the ack - using a sliding window would allow you to make the next call before you have received the ack for the previous one.
Your duplicate delivery example is also quite common - one common way to avoid double counting or getting confused is to have packet numbers and simply discard any duplicated packets.
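For example, the discard logic on the receiving side could be as simple as this sketch (process is a hypothetical downstream handler):

// Items arrive in order on a gRPC stream, so anything at or below the last
// delivered sequence number must be a duplicate from a retried batch.
class DedupingReceiver {
    private long lastDelivered = -1;

    void onItem(long seq, Object payload) {
        if (seq <= lastDelivered) {
            return;                               // duplicate; ignore it
        }
        lastDelivered = seq;
        process(payload);                         // hypothetical downstream handler
    }
}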
I am using Netty to develop my server.
I am looking for setting the setInterestOps for a channel.
In Netty 3 there is a method called setInterestOps in the Channel class.
But in Netty 4 I can't find it.
Can anybody tell me where it is?
Thank you
Channel.setInterestOps() in Netty 3 was used to suspend or resume the read operation of a Netty Channel. Its name and mechanic were unnecessarily low-level, so we changed how we deal with the suspension and resumption of the inbound traffic.
First, we added a new outbound operation called read(). When read() is invoked, Netty will read inbound traffic once, and it will trigger at least one channelRead() event and a single channelReadComplete() event. Usually, you continue to read by calling ctx.read() in channelReadComplete().
However, because having to call ctx.read() for every channelReadComplete() is not very interesting, Netty has an option called autoRead, which is turned on by default. When autoRead is on, Netty will automatically trigger a read() operation on every channelReadComplete().
Therefore, if you want to suspend the inbound traffic, all you need to do is turn the autoRead option off. To resume, turn it back on.
Use Channel.config().setAutoRead(true/false);
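With autoRead switched off as above, a minimal Netty 4 handler that drives reads manually might look like this sketch:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// When autoRead is off, inbound traffic is driven by explicit read() calls:
// calling ctx.read() requests the next chunk, and not calling it suspends the traffic.
public class ManualReadHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.read();   // ask for the next read; omit this call to suspend inbound traffic
    }
}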
I'm building a UDP client that can communicate with a selection of different servers. Given that an NIO application involves using a single receive thread, how can I dispatch incoming datagrams to the correct part of my application? i.e. associate incoming packets with the outgoing packets.
In theory, when sending (or connecting?) to a server, it should be possible to get the source ip/port in the outgoing Datagram and then recognise incoming packets as their responses by inspecting the destination ip/port. (because: http://www.dcs.bbk.ac.uk/~ptw/teaching/IWT/transport-layer/source-destination.gif)
Most UDP client examples seem to assume a single server, so that identifying incoming datagrams as responses to outgoing datagrams is trivial, for example:
ByteBuffer textToEcho = ByteBuffer.wrap("blah".getBytes());
ByteBuffer echoedText = ByteBuffer.allocateDirect(MAX_PACKET_SIZE);
DatagramChannel datagramChannel = DatagramChannel.open(StandardProtocolFamily.INET);
datagramChannel.connect(new InetSocketAddress(REMOTE_IP, REMOTE_PORT)); // single, fixed peer
while (true)
{
    textToEcho.rewind();                              // reset position so the same payload is sent each time
    int sent = datagramChannel.write(textToEcho);     // write() only works on a connected channel
    echoedText.clear();                               // make room for the next response
    datagramChannel.read(echoedText);                 // read() only returns datagrams from the connected peer
}
Perhaps I could use multiple DatagramChannels and iteratively call read() on each, dispatching data to wherever my application is expecting responses?
If you're dead-set on using just one channel (and one bound local port), you need to avoid using the connect and write methods. Instead, use the send method.
Looping through your servers, use the send method for each server. You may need to rewind() your byte buffer after each send... I'm not sure if send clones the buffer or not.
When all servers have been sent to: in a loop, as long as there are servers that haven't responded, use receive to get both the data returned (the buffer argument) and the server that returned it (the method's return value). Keep looping until the server list is exhausted, but put a time limit on the loop itself (for dead servers or lost packets).
Ideally in the receive loop, you want the receive method to block for a short period of time before timing out. If you can't find a way to configure blocking, you could use non-blocking, and put a Thread.sleep in your loop instead. Try to get timed blocking working though, that's the best way.
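Putting that together, a rough sketch of the single-channel approach could look like this (SERVERS, LOCAL_PORT, MAX_PACKET_SIZE and handleResponse are placeholders); a Selector provides the timed wait:

import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.net.StandardProtocolFamily;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// One unconnected channel: send() to every server, then receive() and use the
// returned address to match each response to the server that sent it.
DatagramChannel channel = DatagramChannel.open(StandardProtocolFamily.INET);
channel.bind(new InetSocketAddress(LOCAL_PORT));
channel.configureBlocking(false);                 // non-blocking so the Selector can time out
Selector selector = Selector.open();
channel.register(selector, SelectionKey.OP_READ);

ByteBuffer request = ByteBuffer.wrap("blah".getBytes());
for (InetSocketAddress server : SERVERS) {
    channel.send(request.duplicate(), server);    // duplicate() so each send starts at position 0
}

ByteBuffer response = ByteBuffer.allocateDirect(MAX_PACKET_SIZE);
long deadline = System.currentTimeMillis() + 1000;    // overall time limit for the loop
while (System.currentTimeMillis() < deadline) {
    if (selector.select(100) == 0) continue;      // wait up to 100ms for readable data
    selector.selectedKeys().clear();
    response.clear();
    SocketAddress from = channel.receive(response);
    if (from != null) {
        handleResponse(from, response);           // dispatch by the responding server's address
    }
}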
You should open a separate datagram channel to each server with which you wish to communicate, and hand-off that channel's management (reading/writing) to a separate thread.
The call to ServerBootstrap.bind() returns a Channel, but this Channel is not in a connected state and thus cannot be used for writing to a client.
All the examples in the Netty documentation show writing to a Channel from its ChannelHandler's events, such as channelConnected. I want to be able to get a connected Channel not inside the event handler but as a reference outside it, say from some client code using my server component. One way is to manually wait for the channelConnected event and then copy the Channel reference, but that may be reinventing the wheel.
So the question is: is there a blocking call available in Netty that returns a connected Channel?
Edit: I am using Oio channels, not Nio.
You could create a blocking call, but I think you maligned the event based approach too quickly. This is a contrived example, just to make sure I understand what you're trying to do:
Netty Server starts
A DataPusher service starts.
When a client connects, the DataPusher grabs a reference to the client channel and writes some data to it.
The client receives the pushed data shortly after connecting.
More or less correct ?
To do this, your DataPusher (or better yet, one of its minions) can be registered as a ChannelHandler in the server pipeline you create. Make it extend org.jboss.netty.channel.SimpleChannelHandler. The handler might look like this:
private final DataPusher dataPusher = getMyDataPusherReference();

@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    dataPusher.doYourThing(e.getChannel()); // hand the connected Channel off, e.g. to another thread
}
If you are determined to make it a blocking call from the DataPusher's perspective, just have it wait on a latch and have the minion drop the latch.
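For example, a sketch of that latch-based variant (Netty 3 API; add the handler to your server pipeline):

import java.util.concurrent.CountDownLatch;
import org.jboss.netty.channel.*;

public class ConnectedChannelGrabber extends SimpleChannelHandler {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile Channel connected;

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        connected = e.getChannel();
        latch.countDown();                        // release anyone blocked in waitForChannel()
        super.channelConnected(ctx, e);           // keep forwarding the event upstream
    }

    public Channel waitForChannel() throws InterruptedException {
        latch.await();                            // blocks until a client has connected
        return connected;
    }
}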
Not sure if that's what you're looking for.
After all the exchanges above, I still don't see that any of this is necessary.
Surely all you have to do is just accept the connection from the external service; don't register it for any events; then, when the other client connects, register for I/O events on both channels.
The external service doesn't know or care whether you are blocked in a thread waiting for another connection, or just not responding for some other reason.
If he's writing to you, his writes will succeed anyway, up to the size of your socket receive buffer, whether you are blocking or not, as long as you aren't actually reading from him. When that buffer fills up, he will block until you read some of it.
If he is reading from you, he will block until you send something, and again what you are doing in the meantime is invisible to him.
So I think you just need to simplify your thinking, and align it more with the wonderful world of non-blocking I/O.