My I/O flow is the following:
client sends data#1 to the channel
server (handler) fetches data from the database according to the client's data and sends it to the client
client sends data#2 to the channel
server (handler) fetches data from the database again according to the client's data, sends it back to the client, and closes the channel
If the first read on the channel takes too long, ReadTimeoutHandler fires an exception as expected. But if the first read is OK (= fast enough) and the second read takes too long, no ReadTimeoutException is thrown, and the handler waits 5 minutes until it closes the channel. It seems that ReadTimeoutHandler only works for the first read on the channel. Is it even possible to get ReadTimeoutHandler to work with multiple reads on the channel?
Netty version used: 4.0.12
public class MyServer {

    private static final class MyInitializer extends ChannelInitializer<SocketChannel> {
        ...
        @Override
        public void initChannel(SocketChannel channel) throws Exception {
            channel.pipeline().addLast(
                    new ReadTimeoutHandler(5, TimeUnit.SECONDS),
                    new MyHandler(server, serverConnection));
        }
        ...
    }
}
public class MyHandler extends SimpleChannelInboundHandler<String> {

    private static final Logger LOG = LoggerFactory.getLogger(MyHandler.class);

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) throws Exception {
        Message message = database.getMessage(msg);
        ChannelFuture operation = ctx.writeAndFlush(message);
        if (message.isEnd()) operation.addListener(new CloseConverstationListener(ctx));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        if (cause instanceof ReadTimeoutException) {
            LOG.error("ReadTimeoutException");
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        super.channelInactive(ctx);
    }

    private class CloseConverstationListener implements GenericFutureListener<ChannelFuture> {

        private final ChannelHandlerContext ctx;

        private CloseConverstationListener(ChannelHandlerContext ctx) {
            this.ctx = ctx;
        }

        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            future.channel().close().sync();
        }
    }
}
The ReadTimeoutHandler behavior is: if no read happens on a channel for the specified duration, it fires an exception and closes the channel. It is not about a delay in responding to or processing a read.
At the beginning of a channel read, a reading flag is set to true; it is set back to false when the read completes. A scheduled task checks whether the channel is open and has not read for the specified duration, and if so it fires the exception and closes the connection.
"If first read in channel takes too long, ReadTimeoutHandler fires exception as expected."
The above does not sound correct to me.
If you want to time out based on a delay in writing a response, you might consider WriteTimeoutHandler.
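For illustration, a minimal sketch of the initializer from the question with a WriteTimeoutHandler added alongside the read timeout (the 5-second values are placeholders):

// ReadTimeoutHandler fires a ReadTimeoutException if no inbound data arrives
// within 5 seconds; WriteTimeoutHandler fires a WriteTimeoutException if a
// write operation does not complete within 5 seconds.
channel.pipeline().addLast(
        new ReadTimeoutHandler(5, TimeUnit.SECONDS),
        new WriteTimeoutHandler(5, TimeUnit.SECONDS),
        new MyHandler(server, serverConnection));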
I'm trying to implement a pub-sub pattern using gRPC, but I'm a bit confused about how to do it properly.
My proto: rpc call (google.protobuf.Empty) returns (stream Data);
client:
try {
    asynStub.call(Empty.getDefaultInstance(), new StreamObserver<Data>() {
        @Override
        public void onNext(Data value) {
            // process a data
        }

        @Override
        public void onError(Throwable t) {
        }

        @Override
        public void onCompleted() {
        }
    });
} catch (StatusRuntimeException e) {
    LOG.warn("RPC failed: {}", e.getStatus());
}
Thread.currentThread().join();
server service:
public class Sender extends DataServiceGrpc.DataServiceImplBase implements Runnable {

    private final BlockingQueue<Data> queue;
    private static final HashSet<StreamObserver<Data>> observers = new LinkedHashSet<>();

    public Sender(BlockingQueue<Data> queue) {
        this.queue = queue;
    }

    @Override
    public void data(Empty request, StreamObserver<Data> responseObserver) {
        observers.add(responseObserver);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // waiting for first element
                Data data = queue.take();
                // send head element
                observers.forEach(o -> o.onNext(data));
            } catch (InterruptedException e) {
                LOG.error("error: ", e);
                Thread.currentThread().interrupt();
            }
        }
    }
}
How do I remove clients from the global observers properly? How do I receive some sort of signal when the connection drops?
How do I manage client-server reconnections? How do I force the client to reconnect when the connection drops?
Thanks in advance!
In the implementation of your service:
@Override
public void data(Empty request, StreamObserver<Data> responseObserver) {
    observers.add(responseObserver);
}
You need to get the Context of the current request and listen for cancellation. For single-request, multi-response calls (a.k.a. server streaming), the gRPC generated code is simplified to pass in the request directly. This means that you don't have direct access to the underlying ServerCall.Listener, which is how you would normally listen for clients disconnecting and cancelling.
Instead, every gRPC call has a Context associated with it, which carries the cancellation and other request-scoped signals. For your case, you just need to listen for cancellation by adding your own listener, which then safely removes the response observer from your linked hash set.
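A minimal sketch of that idea, reusing the observers set from the question (the cancellation listener may run on another thread, so access to the set should be synchronized or the set made thread-safe; Runnable::run serves as a same-thread Executor here):

@Override
public void data(Empty request, final StreamObserver<Data> responseObserver) {
    // cancelled(...) fires when the client disconnects or cancels the call
    Context.current().addListener(new Context.CancellationListener() {
        @Override
        public void cancelled(Context context) {
            synchronized (observers) {
                observers.remove(responseObserver);
            }
        }
    }, Runnable::run);
    synchronized (observers) {
        observers.add(responseObserver);
    }
}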
As for reconnects: gRPC clients will automatically reconnect if the connection is broken, but usually will not retry the RPC unless it is safe to do so. In the case of server-streaming RPCs, it is usually not safe to do so, so you'll need to retry the RPC on your client directly.
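As an illustration only, a sketch of such a client-side retry; the scheduler field and the 5-second delay are assumptions, and the stub method name follows the rpc call from the proto above:

private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

private void subscribe() {
    asynStub.call(Empty.getDefaultInstance(), new StreamObserver<Data>() {
        @Override
        public void onNext(Data value) {
            // process a data
        }

        @Override
        public void onError(Throwable t) {
            // the channel reconnects on its own, but the server-streaming RPC
            // must be re-issued; retry after a delay (real code would back off)
            scheduler.schedule(() -> subscribe(), 5, TimeUnit.SECONDS);
        }

        @Override
        public void onCompleted() {
            // server closed the stream normally; re-subscribe here if desired
        }
    });
}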
In a server handler, I have the following method:
private void writeResponse(HttpObject currentObj, ChannelHandlerContext ctx) throws Exception {
    Promise<String> promise = client.run(); // client.run() will return a promise
    // the promise contains the result string I need for the http response.
    promise.sync();
    // this method sends the http response back; promise.getNow() is the content for the response.
    writeResponse(currentObj, ctx, promise.getNow());
}
This method sends a response after getting some data from a client (client in the code). When I test this using a browser, I do get the response content.
However, when I change it to:
private void writeResponse(HttpObject currentObj, ChannelHandlerContext ctx) throws Exception {
    Promise<String> promise = client.run();
    promise.addListener(new FutureListener<String>() {
        @Override
        public void operationComplete(Future<String> future) throws Exception {
            if (future.isSuccess()) {
                writeResponse(currentObj, ctx, future.getNow()); // (1)
            } else {
                writeResponse(currentObj, ctx, "FAILED");
            }
        }
    });
}
it didn't work anymore. From my understanding, the second one should also work, because I've confirmed that the code did enter block (1) (the if (future.isSuccess()) block). But I didn't get any response in the browser. Can anyone explain this a little or point me to some references? I've found the comparison between await() and addListener() in the documentation, but it gives me the feeling that the two are functionally similar.
Thanks!
[update] I found this is caused by this overloaded method:
private void writeResponse(HttpObject currentObj, ChannelHandlerContext ctx, String content) {
    FullHttpResponse response = new DefaultFullHttpResponse(
            HTTP_1_1, currentObj.decoderResult().isSuccess() ? OK : BAD_REQUEST,
            Unpooled.copiedBuffer(content, CharsetUtil.UTF_8));
    response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8");
    if (HttpUtil.isKeepAlive(request)) {
        // Add 'Content-Length' header only for a keep-alive connection.
        response.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
        response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
    }
    ctx.write(response);
}
I shouldn't use ctx.write(response); I should use ctx.writeAndFlush(response).
At first I used ctx.write(response) because I have the channelReadComplete method to do the flush for me:
public void channelReadComplete(ChannelHandlerContext ctx) {
    ctx.flush();
}
But it seems that when using addListener() instead of sync(), channelReadComplete cannot do the flush. Any idea why?
The problem is that you just call write(...) in your ChannelFutureListener and not writeAndFlush(...). Because of this, your written data will never be flushed to the socket.
channelReadComplete(...) may be triggered before the ChannelFutureListener is executed, so you end up in this situation.
To make it short: use writeAndFlush(...) in the listener.
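Applied to the overload from the question, the fix is a one-line change at the end of the method (keep-alive headers omitted for brevity):

private void writeResponse(HttpObject currentObj, ChannelHandlerContext ctx, String content) {
    FullHttpResponse response = new DefaultFullHttpResponse(
            HTTP_1_1, currentObj.decoderResult().isSuccess() ? OK : BAD_REQUEST,
            Unpooled.copiedBuffer(content, CharsetUtil.UTF_8));
    response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8");
    // writeAndFlush(...) instead of write(...): the listener may run after
    // channelReadComplete(), so the flush there cannot be relied upon
    ctx.writeAndFlush(response);
}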
Does anyone know why channelReadComplete gets called before channelRead in some situations? I have a server that is pretty dumb right now, which another Netty server connects to. The first initializer does this:
final EventExecutorGroup taskGroup = new DefaultEventExecutorGroup(50);
ChannelPipeline pipeline = ctx.pipeline();
pipeline.addLast(taskGroup, "ServerHandler", new ServerHandler());
And then in ServerHandler
public class ServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("channelRead");
        // ctx.write(msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        System.out.println("channelReadComplete");
        // ctx.flush();
    }
}
I consistently get channelReadComplete before channelRead when the first Netty server connects to the second Netty server. Is there a reason for this?
I want to send more than one response to the client based on a back-end process. But in the Netty examples I saw, the echo server sends back the response right away.
My requirement is: I need to validate the client and send him an OK response, then send him the DB updates when available.
How can I send more responses to the client? Please direct me to an example or a guide.
At every point in your pipeline you can get the Channel object from the MessageEvent (or ChannelEvent) which is passed from handler to handler. You can use this to send multiple responses at different points in the pipeline.
If we take the echo server example as a base, we can add a handler which sends the echo again (that could also be done in the same handler, but the example shows that multiple handlers can respond):
public class EchoServerHandler extends SimpleChannelUpstreamHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        Channel ch = e.getChannel();
        // first message
        ch.write(e.getMessage());
    }

    // ...
}

public class EchoServerHandler2 extends SimpleChannelUpstreamHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        Channel ch = e.getChannel();
        // send second message
        ch.write(e.getMessage());
    }

    // ...
}
You can do that as long as you have a reference to the relevant Channel (or ChannelHandlerContext). For example, you can do this:
public class MyHandler extends ChannelHandlerAdapter {
    ...
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        MyRequest req = (MyRequest) msg;
        ctx.write(new MyFirstResponse(..));
        executor.execute(new Runnable() {
            public void run() {
                // Perform database operation
                ..
                ctx.write(new MySecondResponse(...));
            }
        });
    }
    ...
}
You can do this as long as Netty doesn't close the Channel. It's better to call close() yourself when you're done.
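For example, the close can be tied to the completion of the final write; ChannelFutureListener.CLOSE is a built-in listener that closes the channel once the operation finishes:

// close the connection once the final response has been written and flushed
ctx.writeAndFlush(new MySecondResponse(...)).addListener(ChannelFutureListener.CLOSE);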
Here's a sample: https://stackoverflow.com/a/48128514/2557517
for (int i = 1; i <= 100; i++) {
    ctx.writeAndFlush(Unpooled.copiedBuffer(Integer.toString(i).getBytes(Charsets.US_ASCII)));
}
ctx.writeAndFlush(Unpooled.copiedBuffer("ABCD".getBytes(Charsets.US_ASCII))).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        ctx.channel().close();
    }
});
I write this in the channelRead() method of my Netty server handler; it responds "12345...100ABCD" back to the client as soon as the server receives a request.
As far as I can see, the order of the message the client receives from the Netty server is always "12345...100ABCD".
I don't know whether this is just by chance. Might it sometimes be "32451...ABCD100" (out of the server's write order)?
Is it possible that the server executes
clientChannel.writeAndFlush(msg1);
clientChannel.writeAndFlush(msg2);
clientChannel.writeAndFlush(msg3);
but the client receives msg2-msg1-msg3 or msg3-msg1-msg2 rather than the write order msg1-msg2-msg3?
In the proxy sample of the Netty project, https://github.com/netty/netty/tree/master/example/src/main/java/io/netty/example/proxy, the HexDumpProxyBackendHandler writes:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) throws Exception {
    inboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                ctx.channel().read();
            } else {
                future.channel().close();
            }
        }
    });
}
This makes sure that the next channelRead() (that is, the inboundChannel.writeAndFlush(msg) in channelRead()) is triggered only once the writeAndFlush() operation has finished.
So what is the purpose of calling ctx.channel().read() in the listener, and executing it only when future.isSuccess()? Isn't it to make sure that the messages written to the client are received in the right order?
If I change it to
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) throws Exception {
    inboundChannel.writeAndFlush(msg);
    ctx.channel().read();
}
Will it cause some issues?
No it is not possible. TCP ensures that.
As EJP states, either technique should guarantee the ordering. The difference between the example and how you've changed it is a question of flow control.
In the original example, the inbound channel will only be read after the data has been successfully flushed to the network buffers. This guarantees that it only reads data as fast as it can send it, preventing Netty's send queue from building up and thus preventing out-of-memory errors.
The altered code reads as soon as the write operation is queued. If the outbound channel is unable to keep up, there is a chance you could see out-of-memory errors when transferring a lot of data.
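This back-pressure only works because the proxy sample disables auto-read and issues read() calls explicitly. A sketch of the relevant bootstrap option, matching the HexDumpProxy example (variable names are placeholders):

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 .childHandler(new HexDumpProxyInitializer(remoteHost, remotePort))
 // disable auto-read: channelRead() fires only after an explicit read(),
 // which the handlers call once the previous write has been flushed
 .childOption(ChannelOption.AUTO_READ, false);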