I am using Netty to develop my server.
I am looking for a way to set the interestOps for a channel.
In Netty 3 there is a method called setInterestOps() in the Channel class.
But in Netty 4 I can't find it.
Can anybody tell me where it is?
Thank you
Channel.setInterestOps() in Netty 3 was used to suspend or resume the read operation of a Netty Channel. Its name and mechanics were unnecessarily low-level, so we changed how we deal with the suspension and resumption of inbound traffic.
First, we added a new outbound operation called read(). When read() is invoked, Netty will read inbound traffic once, and it will trigger at least one channelRead() event and a single channelReadComplete() event. Usually, you continue to read by calling ctx.read() in channelReadComplete().
However, because having to call ctx.read() for every channelReadComplete() is not very interesting, Netty has an option called autoRead, which is turned on by default. When autoRead is on, Netty will automatically trigger a read() operation on every channelReadComplete().
Therefore, if you want to suspend the inbound traffic, all you need to do is turn the autoRead option off. To resume, turn it back on.
Use Channel.config().setAutoRead(true/false);
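For example, a handler could suspend reads while it has too much work queued and resume them later. This is only a minimal sketch; the isBacklogFull() check and the place you call resumeReading() from are made up for illustration:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ThrottlingHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        process(msg);                                   // application logic (illustrative)
        if (isBacklogFull()) {                          // made-up check
            ctx.channel().config().setAutoRead(false);  // suspend inbound traffic
        }
    }

    // Call this once the backlog has drained to resume inbound traffic.
    void resumeReading(ChannelHandlerContext ctx) {
        ctx.channel().config().setAutoRead(true);       // Netty starts issuing read() again
    }

    private void process(Object msg) { /* ... */ }
    private boolean isBacklogFull() { return false; }   // illustrative
}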
I have implemented a retry mechanism which works well based on the following:
https://github.com/spring-projects/spring-integration-samples/issues/237
The application consumes events from kafka, transforms those events and sends them as an HTTP request to a remote service, so it's in the integration flow that sends the HTTP request where the retry mechanism is implemented.
I was worried that, during a temporary failure (a network glitch), the requests might not reach the remote service in the same order as they come in from Kafka, which could cause one update to override another. Fortunately it looks like the order is kept, but keep me honest here.
It seems that during the retry process all incoming events are "put on hold", and once the remote service comes back up (before the last try), all events are sent.
I would like to know two things here:
Am I correct in my assumption? Is this how the retry mechanism works by default?
I'm worried about the events backing (or stacking) up due to the amount of time it takes to finish the current flow execution. Is there something here I should take into consideration?
I think I might use an ExecutorChannel so that events could get processed in parallel, but by doing that I wouldn't be able to keep the order of the events.
Thanks.
Your assumption is correct. The retry is done within the same thread, which is blocked for the next event until the send is successful or the retries are exhausted. And it is really done on the same Kafka consumer thread, so new records are not pulled from the topic until the retry is done.
It is not a correct architecture to shift the logic into a new thread, e.g. by using an ExecutorChannel, since Kafka is based on an offset commit, which cannot be done out of order.
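For context, a retry like this is usually attached to the HTTP outbound endpoint as a RequestHandlerRetryAdvice. A rough sketch, assuming spring-retry's RetryTemplate; the attempt count and back-off period are purely illustrative:

import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public RequestHandlerRetryAdvice retryAdvice() {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));   // up to 5 attempts (illustrative)
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(2000);                           // 2 s between attempts (illustrative)
    retryTemplate.setBackOffPolicy(backOff);

    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    advice.setRetryTemplate(retryTemplate);
    return advice;
}

Because the advice retries synchronously in the calling thread, the Kafka consumer thread that entered the flow stays blocked until the send succeeds or the attempts are exhausted, which is exactly the ordering behaviour described above.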
I read from a stream created by Socket.getInputStream(). When I use it, it blocks until it gets new data (exactly what it should do). Now I need the stream to read something (see below). But when I start a new read it will give me unspecified output (or not?). My question is:
How can I interrupt the current read, so that I can use the read method for something else?
Details: I connect to a server and send commands to it. From time to time the server sends messages to my client (event notifications), which I need to register. I want to be able to send commands while I'm waiting for these messages. When I send a command, the answer to this command is read from the stream. And here is the problem: I'm still listening for the messages while I try to read my answer. So I need something that interrupts the current read.
The problem with stopping the event processor from reading is that you introduce a race condition: What happens if the server sends an event right after you terminated the read? The "response" that you read would wind up being an event.
The proper way to do this is to do all your reading, both events and responses, in one place and handle the responses like an event also. Right before you send a command, register a listener for the response, then send the command. When the reading thread sees a response, have it find the proper listener and notify it that the response has been received.
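A minimal sketch of that idea, assuming one dedicated reader thread and some way to tell responses apart from events (the isResponse() check and the line-based protocol are purely illustrative):

import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

class Connection {
    private final Queue<CompletableFuture<String>> pendingResponses = new ConcurrentLinkedQueue<>();

    // Called by the command sender: register the listener *before* writing the command.
    CompletableFuture<String> sendCommand(String command) {
        CompletableFuture<String> response = new CompletableFuture<>();
        pendingResponses.add(response);
        write(command);
        return response;
    }

    // Called by the single reader thread for every line read from the stream.
    void onLineRead(String line) {
        if (isResponse(line)) {                              // made-up classification
            CompletableFuture<String> pending = pendingResponses.poll();
            if (pending != null) {
                pending.complete(line);                      // notify the waiting command sender
            }
        } else {
            handleEvent(line);                               // a normal event notification
        }
    }

    private void write(String command) { /* write to the socket's OutputStream */ }
    private boolean isResponse(String line) { return !line.startsWith("EVENT"); } // illustrative
    private void handleEvent(String line) { /* event handling */ }
}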
The easiest, best way to handle this IMO is to use an asynchronous listener with event callbacks. DataFetcher is one such implementation (also see Timeout and PartialReadException, dependencies in the same project/package, and IOUtils, which can connect a FetcherListener directly to an InputStream).
The call to ServerBootstrap.bind() returns a Channel, but it is not in a connected state and thus cannot be used for writing to a client.
All the examples in the Netty documentation show writing to a Channel from its ChannelHandler's events, such as channelConnected. I want to be able to get a connected Channel not in the event but as a reference outside it, say in some client code using my server component. One way is to manually wait for the channelConnected event and then copy the Channel reference, but this may be reinventing the wheel.
So the question is: is there a blocking call available in Netty that returns a connected Channel?
Edit: I am using Oio channels, not Nio.
You could create a blocking call, but I think you maligned the event based approach too quickly. This is a contrived example, just to make sure I understand what you're trying to do:
Netty Server starts
A DataPusher service starts.
When a client connects, the DataPusher grabs a reference to the client channel and writes some data to it.
The client receives the pushed data shortly after connecting.
More or less correct ?
To do this, your DataPusher (or better yet, one of its minions) can be registered as a ChannelHandler in the server pipeline you create. Make it extend org.jboss.netty.channel.SimpleChannelHandler. The handler might look like this:
public class DataPusherHandler extends SimpleChannelHandler {
    private final DataPusher dataPusher = getMyDataPusherReference();

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        dataPusher.doYourThing(e.getChannel()); // hand off the connected channel, e.g. to another thread
    }
}
If you are determined to make it a blocking call from the DataPusher's perspective, just have it wait on a latch and have the minion drop the latch.
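A minimal sketch of that blocking variant, assuming Netty 3 (the class and method names are illustrative):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

public class ConnectionLatchHandler extends SimpleChannelHandler {
    private final CountDownLatch connected = new CountDownLatch(1);
    private final AtomicReference<Channel> channelRef = new AtomicReference<>();

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        channelRef.set(e.getChannel());
        connected.countDown();             // drop the latch: a client is now connected
    }

    // Called by the DataPusher: blocks until a client has connected.
    public Channel awaitChannel() throws InterruptedException {
        connected.await();
        return channelRef.get();
    }
}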
Not sure if that's what you're looking for...
After all the exchanges above, I still don't see that any of this is necessary.
Surely all you have to do is just accept the connection from the external service; don't register it for any events; then, when the other client connects, register for I/O events on both channels.
The external service doesn't know or care whether you are blocked in a thread waiting for another connection, or just not responding for some other reason.
If he's writing to you, his writes will succeed anyway, up to the size of your socket receive buffer, whether you are blocking or not, as long as you aren't actually reading from him. When that buffer fills up, he will block until you read some of it.
If he is reading from you, he will block until you send something, and again what you are doing in the meantime is invisible to him.
So I think you just need to simplify your thinking, and align it more with the wonderful world of non-blocking I/O.
In a TCP-based server-client model using Netty Channels, is there any correspondence between the number of Channel.write() calls on the server and the corresponding Channel.messageReceived() invocations on the receiving client? If I do 10 write()s on the sender, does it mean messageReceived() will be invoked 10 times on the listening client? Or can Netty aggregate the sent data (from the write()s on the sender) into more or fewer messageReceived() events on the client? Is there a way to configure this behaviour in Netty?
It's not guaranteed that you have a 1:1 mapping between your Channel.write(..) and messageReceived() calls. You need to use some FrameDecoder subclass (maybe write your own) which will buffer the ChannelBuffers until you receive enough data to dispatch your message to the next ChannelHandlers in the ChannelPipeline on the server.
Netty already ships with some ready-to-use FrameDecoder implementations, like the DelimiterBasedFrameDecoder (for example), which will take care of buffering the data until it receives a delimiter and then dispatch it to the next handlers in the ChannelPipeline.
See [1] for more details.
[1] http://netty.io/docs/stable/api/org/jboss/netty/handler/codec/frame/FrameDecoder.html
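For example, a line-delimited text protocol could be framed like this in the server's pipeline factory. This is a sketch assuming Netty 3; MyBusinessHandler stands in for your own handler:

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.codec.string.StringDecoder;

public class LineBasedPipelineFactory implements ChannelPipelineFactory {
    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        // Buffer bytes until a line delimiter arrives, then emit exactly one frame per line.
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        pipeline.addLast("decoder", new StringDecoder());
        // Your handler now gets one messageReceived() per complete line,
        // regardless of how the sender's write() calls were split or merged by TCP.
        pipeline.addLast("handler", new MyBusinessHandler()); // your own handler (hypothetical)
        return pipeline;
    }
}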
Yes, there is a way to do this, but you'll have to give us more in order to do anything else.
I'm writing a UDP client-server pair for a networks class, and I have hit on a problem. This is a rather unorthodox networks assignment, so a little background first:
The goal is to create a server to implement push-based notifications. The key point here is that the server has to contact the client at whatever address it was last seen at, as well as listen for the client's control packets. Therefore, I have a thread running on the client periodically sending out UDP packets to the server, which logs their origin for when it needs to send out a response. This technique also busts through NATs, as each send refreshes the address translation.
So then, here is my dilemma: unless I'm mistaken, the NAT maps its own address and a generated port number onto its client's address/port combination. Therefore, in order to successfully traverse the NAT, I need to move all my packets through one port on the client machine. The updater thread would simply have to listen for a time, push out an update packet, and go back to listening.
And here is where it gets hairy: if the original thread, which wants to perform some action, needs the port, it has to wake the announcer, which is blocked waiting for a response.
How can I pull this off in Java?
P.S.: If it turns out that the NAT would allow a communication on a different port to go through, then things are awesome.
Note: I am not necessarily telling you this is the right way to solve your larger problem.
But the answer to your top-line question, "How do I signal a sleeping thread in Java" is: Call interrupt() on the thread. You'll need a more elaborate mechanism in place to communicate why it has been interrupted, but that's a start. interrupt() will wake a sleep()ing or wait()ing thread with an InterruptedException, but I don't think that's really what you're asking.
This will not wake up a thread blocked on a read() call, say on a socket. It sounds like you are using a DatagramSocket, in which case you have a couple of options:
Use a non-blocking implementation. (aka, "Selector-based", or New I/O (nio) in Java lingo) See e.g. DatagramChannel; also maybe this SO question and/or this one
Use normal Java I/O, set a socket timeout of suitable length, and wrap your blocking receive() calls in a loop, checking for the appropriate condition (see the sketch below).
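A sketch of the second option, assuming a DatagramSocket; shouldKeepListening() and handlePacket() stand in for your own logic:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;

void listenWithTimeout(DatagramSocket socket) throws IOException {
    socket.setSoTimeout(500);                      // wake up every 500 ms (illustrative)
    byte[] buf = new byte[1024];
    while (shouldKeepListening()) {                // your own condition (hypothetical)
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        try {
            socket.receive(packet);
            handlePacket(packet);                  // your own handler (hypothetical)
        } catch (SocketTimeoutException e) {
            // No data this interval: fall through, re-check the condition,
            // and send the next update packet if it is due.
        }
    }
}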
Look at the following links :
Thread signaling
What is a condition variable in java
Java signal handling
How is the thread 'sleeping'?
Typically, inter-thread cooperation revolves around wait() and notify() calls.
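A minimal wait()/notify() sketch of that cooperation (the updateRequested flag and the class itself are illustrative):

class UpdateSignal {
    private final Object lock = new Object();
    private boolean updateRequested;

    // Called by the thread that wants an update sent (wakes the announcer).
    void requestUpdate() {
        synchronized (lock) {
            updateRequested = true;
            lock.notify();
        }
    }

    // Called by the announcer thread: sleeps until an update is requested.
    void awaitRequest() throws InterruptedException {
        synchronized (lock) {
            while (!updateRequested) {
                lock.wait();       // releases the lock while waiting
            }
            updateRequested = false;
        }
    }
}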
Selectors are one approach I would consider. Haven't used Java's version yet, so take this with a grain of salt.
You could have one selector watching both the UDP channel and an in-process channel, waking up on activity of either.
There's an introduction to selectors halfway down http://java.sun.com/developer/technicalArticles/releases/nio/ . See also the API docs of AbstractSelector and its interface.
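A rough sketch of that approach, assuming a DatagramChannel for the UDP side and a Pipe as the in-process wake-up channel; the port number and buffer sizes are made up:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class UdpSelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        DatagramChannel udp = DatagramChannel.open();
        udp.socket().bind(new InetSocketAddress(4445));       // port is illustrative
        udp.configureBlocking(false);
        udp.register(selector, SelectionKey.OP_READ);

        Pipe pipe = Pipe.open();                               // in-process wake-up channel
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        while (true) {
            selector.select();                                 // blocks until UDP data or a wake-up byte arrives
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.channel() == udp) {
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    udp.receive(buf);                          // read the incoming datagram
                } else {
                    pipe.source().read(ByteBuffer.allocate(1)); // drain the wake-up byte
                    // another thread wrote to pipe.sink(): time to send the next update packet
                }
            }
            selector.selectedKeys().clear();
        }
    }
}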