I am using Netty 3.6.6.Final and trying to implement a write timeout for my handler such that, on timeout, I need to write a specific response. In addition, I need to cancel another write response that is currently executing in the pipeline (or is about to execute).
Here is my current pipeline:
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() throws Exception {
        return Channels.pipeline(LOGGER,
                new HttpServerCodec(),
                new MyHttpContentDecoder(),
                new IdleStateHandler(timer, 0, 1000, 0, TimeUnit.MILLISECONDS),
                handler.get());
    }
});
The handler extends IdleStateAwareChannelHandler and implements the channelIdle method, where I check for the write timeout:
if (e.getState() == IdleState.WRITER_IDLE) {
    e.getChannel().write(SOME_RESPONSE).addListener(new ChannelFutureListener() {
        public void operationComplete(ChannelFuture future) throws Exception {
            future.getChannel().close();
        }
    });
}
The question is: how do I cancel the write that I scheduled in the messageReceived method (the one intended for the no-timeout case)? Is there something customary in Netty for dealing with such a problem?
EDIT
Cancelling via ChannelFuture does not work. As far as I understand, most of the time the write will not be cancelled. During my tests that was the case every time, i.e. cancel() always returned false. So I guess it is really hard to achieve it this way.
In the end I updated the code to the latest release, 4.0.9.Final (much nicer API).
And all of a sudden I received responses as a result of the write timeout. It didn't work this way in 3.6.6.Final.
In 4.0.9.Final the code for handling the write timeout is a bit different, but I always get a second write on timeout (if I comment out ctx.writeAndFlush below, then I get the write from channelRead0):
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    if (evt instanceof IdleStateEvent) {
        IdleStateEvent e = (IdleStateEvent) evt;
        if (e.state() == IdleState.WRITER_IDLE) {
            // the following condition was always false
            // (channelFuture is a state variable of my handler for the previous write)
            if (channelFuture != null && channelFuture.isCancellable()) {
                System.out.println("Cancel " + channelFuture.cancel(true));
            }
            ctx.writeAndFlush(SOME_RESPONSE);
        }
    }
}
I don't know if this is the right way to "overwrite" the first write attempt when a timeout occurs, and I would be glad if someone could explain why it works and what changed in the latest release regarding this scenario.
When a message is flushed, its promise is set to be uncancellable. Once even one byte of a message has been written out, the message must be written completely so that the decoder on the other side can deal with a stream-based transport.
You can try to cancel the ChannelFuture returned by the write operation.
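For example, a minimal sketch with the 4.x API mentioned in the edit; TIMEOUT_RESPONSE is a hypothetical placeholder, and per the other answer this can only succeed if the original message has not been flushed yet:
// Sketch (Netty 4.x): keep the future from ctx.write(...) -- queued but not yet
// flushed -- and try to cancel it when the write timeout fires.
ChannelFuture pending = ctx.write(SOME_RESPONSE);   // queued, not flushed yet

// later, e.g. inside userEventTriggered on WRITER_IDLE:
if (pending.isCancellable() && pending.cancel(false)) {
    ctx.writeAndFlush(TIMEOUT_RESPONSE);             // the original write was dropped
} else {
    // the original write was already flushed (or is being flushed),
    // so it can no longer be cancelled
}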
Related
Recently I was reading the OkHttp source code and found that RetryAndFollowUpInterceptor.java contains a while(true) recycle. But the program does not get stuck in that place or trigger an 'Application Not Responding'. Then I attached the debugger, put a breakpoint there, and found that it is running on the main thread. I don't know why the program runs normally at all. Who can help me?
Thanks to stephenC's answer, but you might not have gotten my point. In my question I mean: why does the program not get stuck in the while(true) recycle on the main thread?
For example, if there were a while(true) recycle in an activity's onCreate() lifecycle method, the app might not run correctly; it would just get stuck in that place and could not respond to touch events, which means an Application Not Responding (ANR).
How will the recycle exit?
The following is the source code:
@Override
public Response intercept(Chain chain) throws IOException {
    ...
    // This is the start of the recycle! It just recycles forever
    while (true) {
        if (canceled) {
            streamAllocation.release();
            throw new IOException("Canceled");
        }

        Response response;
        boolean releaseConnection = true;
        try {
            response = realChain.proceed(request, streamAllocation, null, null);
            releaseConnection = false;
        } catch (RouteException e) {
            // The attempt to connect via a route failed. The request will not have been sent.
            if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
                throw e.getFirstConnectException();
            }
            releaseConnection = false;
            continue;
        } catch (IOException e) {
            // An attempt to communicate with a server failed. The request may have been sent.
            boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
            if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
            releaseConnection = false;
            continue;
        } finally {
            // We're throwing an unchecked exception. Release any resources.
            if (releaseConnection) {
                streamAllocation.streamFailed(null);
                streamAllocation.release();
            }
            ...
        }
    }
}
If you are talking about this:
if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
    throw e.getFirstConnectException();
}
releaseConnection = false;
continue;
or the similar code for IOException, the recover(...) call tests whether the failed request is recoverable. (See the method for what that actually means; one of the criteria is that there is an alternative route that has not been tried yet.) If the call returns true, then the intercept method will retry. If it returns false, then the relevant exception is rethrown.
In general, the logic in this class is complicated, but it needs to be. And it clearly does work. So maybe you just need to read and study more of the context to understand what is happening.
Note that using a debugger to trace this could be difficult because the behavior is liable to change due to timeouts happening differently and altering the normal flow of control. Consider that possibility ...
UPDATE
How will the recycle exit?
It is called a loop, not a recycle.
As I have explained above, the particular loop paths that you highlighted will terminate (by rethrowing an exception) if the recover(...) call returns false. Let us take a look at recover.
private boolean recover(IOException e, boolean routeException, Request userRequest) {
    streamAllocation.streamFailed(e);

    // The application layer has forbidden retries.
    if (!client.retryOnConnectionFailure()) return false;

    // We can't send the request body again.
    if (!routeException && userRequest.body() instanceof UnrepeatableRequestBody) return false;

    // This exception is fatal.
    if (!isRecoverable(e, routeException)) return false;

    // No more routes to attempt.
    if (!streamAllocation.hasMoreRoutes()) return false;

    // For failure recovery, use the same route selector with a new connection.
    return true;
}
The first statement is doing some cleanup on the StreamAllocation object. It isn't relevant to this.
The rest of the method is testing various things:
If the application layer forbids retries, it says don't retry.
If a request was sent and it is not a resendable request, it says don't retry.
If the exception indicates an error that won't be fixed by retrying, it says don't retry.
If the route selector has no more routes to try, it says don't retry.
Otherwise it says retry. Next time that proceed is called, it will try the next route from the route selector.
Note that eventually it will run out of alternative routes to try.
It's quite appealing to use Action(s) instead of a whole Subscriber when you only need onNext(), merely because it's more readable. But of course errors happen, and if you only use Action1 you'll get an exception in your app. The do* operators can be of help here. I'm only concerned whether these two approaches are fully the same; please confirm or refute. Any pitfalls?
The first approach:
Observable
    .just(readFromDatabase())
    .doOnError(new Action1<Throwable>() {
        @Override public void call(Throwable throwable) {
            // handle error
        }
    })
    .subscribe(new Action1<SomeData>() {
        @Override public void call(SomeData someData) {
            // react!
        }
    });
The second approach:
Observable
    .just(readFromDatabase())
    .subscribe(new Subscriber<SomeData>() {
        @Override public void onCompleted() {
            // do nothing
        }

        @Override public void onError(Throwable e) {
            // handle error
        }

        @Override public void onNext(SomeData someData) {
            // react!
        }
    });
Thank you!
The two approaches aren't quite the same, and you're going to get some surprises out of the first:
The first surprise is that doOnError doesn't consume the error; it only performs some action on it. Consequently, in your case, if the stream generates an error, it will go through your doOnError code and right after that trigger an OnErrorNotImplementedException, exactly as if the doOnError step weren't there.
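To make that concrete, a tiny sketch (RxJava 1.x, forcing an error with Observable.error instead of your readFromDatabase()):
// Sketch: doOnError only observes the error; subscribe(onNext) still has no
// error handler, so the error surfaces as an OnErrorNotImplementedException.
Observable.<SomeData>error(new RuntimeException("boom"))
    .doOnError(new Action1<Throwable>() {
        @Override public void call(Throwable throwable) {
            System.err.println("logged: " + throwable);   // runs first
        }
    })
    .subscribe(new Action1<SomeData>() {
        @Override public void call(SomeData someData) {
            // never called
        }
    });
// -> "logged: ..." is printed, then rx.exceptions.OnErrorNotImplementedException is thrown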
Let's say you realize that, and add an empty error handler to your subscribe call:
Observable
    .just(readFromDatabase())
    .doOnError(...)
    .subscribe(..., error -> { /* already handled */ });
Then you can meet the next subtle difference: do* blocks are considered part of the stream, which means that any uncaught exception in a do* block results in a stream error (as opposed to exceptions thrown in onNext/onError/onCompleted blocks, which get either ignored or immediately thrown, cancelling the subscription on their way).
So in the above sample, if we say your database read triggers a stream error A, which gets passed to the doOnError block, which throws an exception B, then the (empty) subscription error handler we added will receive B (and only B).
The latter difference isn't very concerning for doOnError (because the stream gets terminated anyway), but it can be pretty surprising when it occurs in doOnNext, where an exception behaves very differently from the same exception thrown in the subscriber's onNext block (an errored stream versus an implicitly cancelled stream).
I have a channel that needs to stay open because a lot of messages get written, and I don't want to do an SSL handshake for every write.
If I were to do this:
ChannelFuture future = channel.writeAndFlush(message1);
channel.writeAndFlush(message2);
future.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture channelFuture) throws Exception {
        // check for success
    }
});
channel.writeAndFlush(message3);
is the assumption correct that operationComplete will only be invoked for message1, but never for message2 or message3?
The ChannelFutureListener will only be executed for the ChannelFuture that was returned by a write. Each write will return another one. So yes.
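If you do want a callback for every message, one possible sketch is to attach a listener to each returned future (checkResult is just a name picked for the sketch):
// Sketch: each write returns its own ChannelFuture, so each one needs its own listener.
ChannelFutureListener checkResult = new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture f) throws Exception {
        if (!f.isSuccess()) {
            f.cause().printStackTrace();   // or close the channel, log, etc.
        }
    }
};
channel.writeAndFlush(message1).addListener(checkResult);
channel.writeAndFlush(message2).addListener(checkResult);
channel.writeAndFlush(message3).addListener(checkResult);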
for (int i = 1; i <= 100; i++) {
    ctx.writeAndFlush(Unpooled.copiedBuffer(Integer.toString(i).getBytes(Charsets.US_ASCII)));
}
ctx.writeAndFlush(Unpooled.copiedBuffer("ABCD".getBytes(Charsets.US_ASCII))).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        ctx.channel().close();
    }
});
I write this in the channelRead() method of my Netty server handler; it responds with "12345...100ABCD" to the client as soon as the server receives a request.
As far as I can see, the order of the messages the client receives from the Netty server is always "12345...100ABCD".
I don't know: is this just by chance? Maybe sometimes it would be "32451...ABCD100" (out of the server's write order)?
Is it possible that the server executes
clientChannel.writeAndFlush(msg1);
clientChannel.writeAndFlush(msg2);
clientChannel.writeAndFlush(msg3);
but the client receives msg2-msg1-msg3 or msg3-msg1-msg2, and not the write order msg1-msg2-msg3?
In the proxy sample of the Netty project, https://github.com/netty/netty/tree/master/example/src/main/java/io/netty/example/proxy
the HexDumpProxyBackendHandler writes:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) throws Exception {
    inboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                ctx.channel().read();
            } else {
                future.channel().close();
            }
        }
    });
}
It makes sure that the next channelRead() (that is, the next inboundChannel.writeAndFlush(msg) in channelRead()) is triggered only after the writeAndFlush() operation has finished.
So what is the purpose of calling ctx.channel().read() in the listener and executing it only when future.isSuccess()? Isn't it to make sure that the messages written to the client are received in the right order?
If I change it to
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) throws Exception {
    inboundChannel.writeAndFlush(msg);
    ctx.channel().read();
}
Will it cause any issues?
No, it is not possible. TCP ensures that.
As EJP states, either technique should guarantee the ordering. The difference between the example and how you've changed it is a question of flow control.
In the original example the inbound channel will only be read after the data has been successfully flushed to the network buffers. This guarantees that it only reads data as fast as it can send it, preventing Netty's send queue from building up and thus preventing out of memory errors.
The altered code reads as soon as the write operation is queued. If the outbound channel is unable to keep up, there's a chance you could see out-of-memory errors if you're transferring a lot of data.
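As a rough alternative sketch (not what the Netty proxy example does), you could use the outbound channel's writability as the flow-control signal instead of waiting for each flush to complete:
// Sketch: only keep reading while the channel we write to is still writable.
// If it becomes unwritable, reads must be resumed later, e.g. from a
// channelWritabilityChanged() callback on the inboundChannel side.
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) throws Exception {
    inboundChannel.writeAndFlush(msg);
    if (inboundChannel.isWritable()) {
        ctx.channel().read();   // the send queue is still below the high-water mark
    }
    // otherwise: wait until inboundChannel is writable again before calling read()
}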
I'm planning to use Netty to design a TCP server. When the client connects, I have to immediately start pumping XML data to the client continuously... for hours or days. It's that simple.
So I override the channelConnected method and send data from that method, right? That's great.
I will be using the following ChannelFactory
ChannelFactory factory =
    new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
NioServerSocketChannelFactory documentation says
A worker thread performs non-blocking read and write for one or more Channels in a non-blocking mode.
Good.
According to Effective Java, Item 51 ("Don't depend on the thread scheduler"), I want the worker thread to do a "unit of work" and then finish/return.
So in my case, though I have to send data continuously, I want to send some chunk (let's say 1 MB) and then be done (unit of work completed) so that the worker thread can return. Then I'll send another 1 MB.
The example below is from the official Netty guide.
I guess the question then is: in this scenario, if I had to unconditionally keep sending the time to the client, how would I do it, considering each send as a unit of work?
One way of doing it would be to just use a while loop with Thread.sleep(). Is there any other way?
package org.jboss.netty.example.time;

public class TimeServerHandler extends SimpleChannelHandler {

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        Channel ch = e.getChannel();

        ChannelBuffer time = ChannelBuffers.buffer(4);
        // cast needed: writeInt takes an int, currentTimeMillis() returns a long
        time.writeInt((int) (System.currentTimeMillis() / 1000L));

        ChannelFuture f = ch.write(time);
        f.addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                Channel ch = future.getChannel();
                ch.close();
            }
        });
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
Doing a while/sleep would work, but would not be in the Netty super-scalable style. It would be thread-per-connection programming.
Instead, schedule a periodic job on an Executor that writes a message to the channel.
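A minimal sketch of that idea against the Netty 3.x API used in the question; the ScheduledExecutorService, the interval, and the nextChunk() helper are assumptions for illustration, not part of the official example:
// Sketch: write one chunk per tick as a bounded "unit of work" instead of looping.
private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    final Channel ch = e.getChannel();
    scheduler.scheduleWithFixedDelay(new Runnable() {
        public void run() {
            if (ch.isConnected() && ch.isWritable()) {
                ch.write(nextChunk());   // hypothetical helper producing ~1 MB of XML
            }
        }
    }, 0, 100, TimeUnit.MILLISECONDS);   // interval chosen arbitrarily for the sketch
}
The scheduled task would also need to be cancelled when the channel closes (for example in channelDisconnected), which is omitted here for brevity.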