Netty IdleStateHandler invalid delay time - Java

I have an example that uses Java 1.8 and Netty 4.1.30.Final, where an IdleStateHandler is meant to print the current time whenever there is no I/O activity for 400 milliseconds. However, the current time is printed at intervals of roughly 2 seconds instead of every 400 milliseconds.
Here is my example code:
Client.java
public void connect() {
    EventLoopGroup workerGroup = new OioEventLoopGroup();
    try {
        Bootstrap bootstrap = new Bootstrap();
        bootstrap.group(workerGroup)
                 .channel(RxtxChannel.class)
                 .option(RxtxChannelOption.BAUD_RATE, 38400)
                 .option(RxtxChannelOption.DATA_BITS, RxtxChannelConfig.Databits.DATABITS_8)
                 .option(RxtxChannelOption.PARITY_BIT, RxtxChannelConfig.Paritybit.NONE)
                 .option(RxtxChannelOption.STOP_BITS, RxtxChannelConfig.Stopbits.STOPBITS_1)
                 .handler(new ExampleChannelInitializer());
        this.channel = bootstrap.connect(new RxtxDeviceAddress("COM1")).sync().channel();
        this.channel.closeFuture().addListener(f -> {
            workerGroup.shutdownGracefully();
        });
    } catch (Exception e) {
        throw new ConnectionException(e.getMessage(), e);
    }
}
ExampleChannelInitializer.java
@Override
protected void initChannel(RxtxChannel ch) throws Exception {
    ChannelPipeline pipeline = ch.pipeline();
    pipeline.addLast(new IdleStateHandler(0, 0, 400, TimeUnit.MILLISECONDS));
    pipeline.addLast(new ChannelInboundHandlerAdapter() {
        @Override
        public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
            System.out.println(LocalDateTime.now());
        }
    });
}
Console
2018-10-30T10:42:02.762
2018-10-30T10:42:04.789
2018-10-30T10:42:06.818
2018-10-30T10:42:08.844
2018-10-30T10:42:10.871

This is unfortunately just how the OIO transport, and therefore the RXTX transport, works under the hood. You can make the timing more "precise" by setting RxtxChannelOption.READ_TIMEOUT and RxtxChannelOption.WAIT_TIME to smaller values.
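A minimal sketch of that suggestion, applied to the bootstrap from Client.java above (the 100 ms / 10 ms values are illustrative assumptions, not values prescribed by the answer):

bootstrap.group(workerGroup)
         .channel(RxtxChannel.class)
         // Shorter blocking-read timeout: the OIO event loop returns from its
         // blocking read more often, so scheduled tasks such as IdleStateHandler's
         // idle check can run closer to their intended schedule.
         .option(RxtxChannelOption.READ_TIMEOUT, 100)
         // Shorter pause between read attempts.
         .option(RxtxChannelOption.WAIT_TIME, 10)
         .handler(new ExampleChannelInitializer());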

Related

Shutting down Netty server itself

I'm using a Netty server to solve the following problem: reading a big file line by line and processing it. Doing it on a single machine is still slow, so I've decided to use a server to serve chunks of data to clients. That already works, but I also want the server to shut itself down once the whole file has been processed. The source code I'm using right now is:
public static void main(String[] args) throws Exception {
    new Thread(() -> {
        // reading the big file and populating 'dq' - the data queue
    }).start();
    final EventLoopGroup bossGroup = new NioEventLoopGroup(1);
    final EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup, workerGroup)
         .channel(NioServerSocketChannel.class)
         .option(ChannelOption.SO_BACKLOG, 100)
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             public void initChannel(SocketChannel ch) {
                 ChannelPipeline p = ch.pipeline();
                 p.addLast(new StringDecoder());
                 p.addLast(new StringEncoder());
                 p.addLast(new ServerHandler(dq, bossGroup, workerGroup));
             }
         });
        ChannelFuture f = b.bind(PORT).sync();
        f.channel().closeFuture().sync();
    } finally {
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
class ServerHandler extends ChannelInboundHandlerAdapter {
    private final Queue<String> dq;
    private final EventLoopGroup bossGroup;
    private final EventLoopGroup workerGroup;

    public ServerHandler(Queue<String> dq, EventLoopGroup bossGroup, EventLoopGroup workerGroup) {
        // assigning params to instance fields
        this.dq = dq;
        this.bossGroup = bossGroup;
        this.workerGroup = workerGroup;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // creating a bulk of data from 'dq' and sending it to the client, e.g.
        // ctx.write(dq.poll());
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        if (dq.isEmpty() /* or another check that the file was processed */) {
            try {
                ctx.channel().closeFuture().sync();
            } catch (InterruptedException ie) {
                // ...
            }
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        ctx.close();
        ctx.executor().parent().shutdownGracefully();
    }
}
Is the server shutdown in the channelReadComplete(...) method correct? What I'm afraid of is that another client might still be being served (e.g. a big bulk is still being sent to another client while the current client has reached the end of 'dq').
The base code is from the Netty EchoServer/DiscardServer examples.
The question is: how do I shut down a Netty server (from a handler) when a specific condition is reached?
Thanks
You cannot shut down a server from within a handler: the ctx.channel().closeFuture().sync() call above would block the very event loop thread that has to complete the close (Netty rejects such calls with a BlockingOperationException). What you can do is signal a different thread that it should shut down the server.
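A minimal sketch of that approach (the shutdownSignal latch is a hypothetical name; any cross-thread signal such as a CountDownLatch works), reusing the dq, bossGroup and workerGroup names from the question:

// In main(), after starting the server (import java.util.concurrent.CountDownLatch):
final CountDownLatch shutdownSignal = new CountDownLatch(1);
new Thread(() -> {
    try {
        shutdownSignal.await();            // block until a handler signals
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
    }
    // Safe here: this is not an event loop thread.
    workerGroup.shutdownGracefully();
    bossGroup.shutdownGracefully();
}, "shutdown-signal-thread").start();

// In ServerHandler.channelReadComplete(...), instead of shutting down directly:
if (dq.isEmpty()) {
    ctx.close();                           // close this client's channel asynchronously
    shutdownSignal.countDown();            // wake the shutdown thread
}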

SimpleChannelInboundHandler never fires channelRead0

One day I decided to create a Netty chat server using the TCP protocol. Currently it successfully logs connects and disconnects, but channelRead0 in my handler never fires. I tested with a Python client.
Netty version: 4.1.6.Final
Handler code:
public class ServerWrapperHandler extends SimpleChannelInboundHandler<String> {
    private final TcpServer server;

    public ServerWrapperHandler(TcpServer server) {
        this.server = server;
    }

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        System.out.println("Client connected.");
        server.addClient(ctx);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        System.out.println("Client disconnected.");
        server.removeClient(ctx);
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, String msg) {
        System.out.println("Message received.");
        server.handleMessage(ctx, msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        System.out.println("Read complete.");
        super.channelReadComplete(ctx);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Output:
[TCPServ] Starting on 0.0.0.0:1052
Client connected.
Read complete.
Read complete.
Client disconnected.
Client code:
import socket

conn = socket.socket()
conn.connect(("127.0.0.1", 1052))
conn.send(b"Hello")  # bytes, not str, for Python 3

data = b""
tmp = conn.recv(1024)
while tmp:
    data += tmp
    tmp = conn.recv(1024)
print(data.decode("utf-8"))
conn.close()
By the way, the problem was in my initializer: I had added a DelimiterBasedFrameDecoder to my pipeline, and this decoder was holding the messages back. This is most likely because the decoder buffers bytes until it sees a line delimiter, and the client sends "Hello" without a trailing newline, so no complete frame is ever passed on to channelRead0. I didn't need this decoder, so I just deleted it, and everything started to work.
@Override
protected void initChannel(SocketChannel ch) throws Exception {
    // Create a default pipeline implementation.
    ChannelPipeline pipeline = ch.pipeline();
    // Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java object.
    // Protocol Encoder - translates a Java object into binary data.
    // Add the text line codec combination first,
    pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter())); // <--- DELETE THIS
    pipeline.addLast("decoder", new StringDecoder());
    pipeline.addLast("encoder", new StringEncoder());
    pipeline.addLast("handler", new ServerWrapperHandler(tcpServer));
}
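Alternatively, if line-based framing is actually wanted, the framer can stay as long as every client message ends with a line delimiter. A minimal Java test client illustrating this (a sketch under the assumption that the server above is listening on port 1052):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TestClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("127.0.0.1", 1052)) {
            OutputStream out = socket.getOutputStream();
            // The trailing '\n' lets DelimiterBasedFrameDecoder emit a complete frame.
            out.write("Hello\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}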

Netty : Why use ChannelInboundHandlerAdapter in TimeServerHandler?

I am going through Netty's documentation here and the diagram here.
My question is: the TimeServer writes the time into the socket for the client to read. Shouldn't it use a ChannelOutboundHandlerAdapter? Why is the logic in a ChannelInboundHandlerAdapter?
I couldn't understand this; please explain.
TimeServer:
public class TimeServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(final ChannelHandlerContext ctx) { // (1)
        final ByteBuf time = ctx.alloc().buffer(4); // (2)
        time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));
        final ChannelFuture f = ctx.writeAndFlush(time); // (3)
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                assert f == future;
                ctx.close();
            }
        }); // (4)
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
TimeClient:
public class TimeClient {
    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}
The reason we use a ChannelInboundHandlerAdapter is that we are reacting to an event on the channel that the client established to the server. Since that event is inbound with respect to the server, we use a ChannelInboundHandlerAdapter: channelActive is an inbound event that fires once the client's connection becomes active, and from there the server writes the time out through the ChannelHandlerContext, which any handler can do.
Because your server will respond to incoming messages, it needs to implement the ChannelInboundHandler interface, which defines methods for acting on inbound events.
Further, ChannelInboundHandlerAdapter has a straightforward API, and each of its methods can be overridden to hook into the event lifecycle at the appropriate point.
I just started learning Netty, hope this helps. :)
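To make the inbound/outbound distinction concrete, here is a small illustrative sketch (not part of the tutorial): outbound handlers intercept operations that your code requests, such as write or close, while inbound handlers react to events arriving from the transport, such as channelActive or channelRead.

public class LoggingOutboundHandler extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        // Invoked when application code requests a write, e.g. the
        // ctx.writeAndFlush(time) in TimeServerHandler above.
        System.out.println("write() requested: " + msg);
        ctx.write(msg, promise); // forward the operation further outbound
    }
}

The TimeServer needs no such handler because nothing about the write operation itself has to be intercepted; the trigger for the write is the inbound channelActive event.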

Netty TCP server characters become garbage

I have been assigned to build a TCP server in my organization to receive text messages and split them. Unfortunately, some of my message characters become garbage (I have used JMeter as my TCP client). I have two questions related to this problem; any help is highly appreciated.
1. Why can't we split my message using the "»" (\u00BB) character? It never worked. How can "»" be used as a delimiter in DelimiterBasedFrameDecoder?
2. Why do we receive garbage characters although I used UTF-8 in encoding/decoding? (I only manage to receive messages when I comment out pipeline.addLast("frameDecoder", new io.netty.handler.codec.DelimiterBasedFrameDecoder(500000, byteDeli)).)
Sample request:
pov1‹1‹202030‹81056581‹0‹6‹565810000011‹0‹130418135639‹3‹4‹0‹cha7373737›chaE15E2512380›1›1«ban7373737›banE15E2512380›2›2«ind7373737›indE15E2512380›3›3»
Eclipse console: Recieved Request ::::::
pov1�1�202030�81056581�0�6�565810000011�0�130418135639�3�4�0�cha7373737�chaE15E2512380�1�1�ban7373737�banE15E2512380�2�2�ind7373737�indE15E2512380�3�3�
Server class:-
public void run() {
    try {
        System.out.println("2:run");
        bootstrap
            .group(bossGroup, workerGroup)
            .channel(NioServerSocketChannel.class)
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ChannelPipeline pipeline = ch.pipeline();
                    DTMTCPServiceHandler serviceHandler =
                            context.getBean(DTMTCPServiceHandler.class);
                    pipeline.addFirst(new LoggingHandler(LogLevel.INFO));
                    byte[] delimiter = "\u00BB".getBytes(CharsetUtil.UTF_8); // »
                    ByteBuf byteDeli = Unpooled.copiedBuffer(delimiter);
                    pipeline.addLast("frameDecoder",
                            new io.netty.handler.codec.DelimiterBasedFrameDecoder(
                                    500000, byteDeli)); // Decoders
                    pipeline.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
                    pipeline.addLast("stringEncoder", new StringEncoder(CharsetUtil.UTF_8));
                    pipeline.addLast("messageHandler", serviceHandler);
                }
            })
            .option(ChannelOption.SO_BACKLOG, 128)
            .childOption(ChannelOption.SO_KEEPALIVE, true);
        serverChannel = bootstrap.bind(7070).sync().channel()
                .closeFuture().sync().channel();
    } catch (InterruptedException e) {
        // error
        logger.error("POSGatewayServiceThread : InterruptedException", e);
        System.out.println(e);
    } finally {
        // finally
        System.out.println("finally");
        serverChannel.close();
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
Handler class
public class DTMTCPServiceHandler extends ChannelInboundHandlerAdapter {
#Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
String posMessage = msg.toString();
System.out.println("Recieved Request :::::: " + posMessage);
String response = "-";
ByteBuf copy = null;
try {
//Called to separate splitter class
response = dtmtcpServiceManager.manageDTMTCPMessage(posMessage);
copy = Unpooled.copiedBuffer(response.getBytes());
} finally {
logger.info("Recieved Response :::::: " + response);
ctx.write(copy);
ctx.flush();
}
}
#Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
//Open
super.channelActive(ctx);
}
#Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
//End
super.channelReadComplete(ctx);
}
#Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
//exception
ctx.close();
}
}
Found the problem, and it is not related to Netty. The error was in JMeter's encoding; I managed to solve it by modifying the "jmeter.properties" file in \apache-jmeter-x.xx\bin:
tcp.charset=UTF-8
Sorry for the trouble; the fault was mine. Once the client actually sends UTF-8, the delimiter built with "\u00BB".getBytes(CharsetUtil.UTF_8) matches the incoming bytes and the frame decoder works as expected.
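For completeness, a quick sanity check (not from the original answer) of why the charset matters here: "»" is two bytes in UTF-8 but one byte in a single-byte encoding such as ISO-8859-1, and DelimiterBasedFrameDecoder matches raw bytes, so client and server must agree on the charset.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class DelimiterBytes {
    public static void main(String[] args) {
        // UTF-8: [-62, -69] -> 0xC2 0xBB (two bytes)
        System.out.println(Arrays.toString("\u00BB".getBytes(StandardCharsets.UTF_8)));
        // ISO-8859-1: [-69] -> 0xBB (one byte; never matches the two-byte UTF-8 delimiter)
        System.out.println(Arrays.toString("\u00BB".getBytes(StandardCharsets.ISO_8859_1)));
    }
}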

Netty 4 ReadTimeoutHandler not throwing in OioEventLoopGroup

I'm new to Netty. Is this expected behaviour?
In a bit more detail:
public class Test {
    public static void connect() {
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        Bootstrap bs = new Bootstrap();
        bs.group(workerGroup);
        bs.channel(NioSocketChannel.class);
        bs.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000);
        bs.handler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) throws Exception {
                ChannelPipeline pl = ch.pipeline();
                pl.addLast("readTimeoutHandler",
                        new ReadTimeoutHandler(1000, TimeUnit.MILLISECONDS));
                pl.addLast("framer", new DelimiterBasedFrameDecoder(
                        16384, Delimiters.lineDelimiter()));
                pl.addLast("string-decoder", new StringDecoder());
                pl.addLast("handler", new SimpleChannelInboundHandler<String>(String.class) {
                    @Override
                    protected void channelRead0(ChannelHandlerContext ctx, String msg)
                            throws Exception {
                        System.out.println(msg);
                    }

                    @Override
                    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
                            throws Exception {
                        if (cause instanceof ReadTimeoutException) {
                            System.out.println("Timed out.");
                        }
                        ctx.close();
                    }
                });
            }
        });
        bs.connect("127.0.0.1", 45001);
    }
}
This is just a test case, so it might be a bit contrived, but the pipeline resembles my actual pipeline closely enough.
Basically, if I change the EventLoopGroup initialization from NioEventLoopGroup to OioEventLoopGroup and the bootstrap channel setup from bootstrap.channel(NioSocketChannel.class) to bootstrap.channel(OioSocketChannel.class), without touching anything else, ReadTimeoutHandler stops throwing ReadTimeoutExceptions.
This was fixed in Netty 4.0.4.Final. Please upgrade; see [1].
[1] https://github.com/netty/netty/issues/1614
