I read the documentation of IdleStateHandler and implemented it in my server exactly as the documentation shows,
but I don't understand how I can tell when the client has become disconnected, for example when the client loses Wi-Fi connectivity.
My understanding is that inside my handler, the method channelInactive() is triggered when the client becomes disconnected,
and that with IdleStateHandler, IdleState.READER_IDLE is triggered when no read has been performed for the specified period of time.
So after 3 seconds of no reads from the client I close the channel, expecting channelInactive() to be triggered, but it isn't. Why?
Initializer
public class ServerInitializer extends ChannelInitializer<SocketChannel> {

    String TAG = "LOG: ";

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        System.out.println(TAG + "Starting ServerInitializer class...");
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast("decoder", new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
        pipeline.addLast("encoder", new ObjectEncoder());
        pipeline.addLast("idleStateHandler", new IdleStateHandler(6, 3, 0, TimeUnit.SECONDS));
        pipeline.addLast("handler", new ServerHandler());
    }
}
Handler
public class ServerHandler extends ChannelInboundHandlerAdapter {

    private String TAG = "LOG: ";

    public ServerHandler() {}

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        Log.w(TAG, "New Client become connected, Sending a message to the Client. Client Socket is: " + ctx.channel().remoteAddress().toString());
        List<String> msg = new ArrayList<>();
        msg.add(0, "sample message 1");
        msg.add(1, "sample message 2");
        sendMessage(ctx, msg);
    }

    public void sendMessage(ChannelHandlerContext ctx, List message) {
        ctx.write(message);
        ctx.flush();
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        Log.w(TAG, "A Client become disconnected. Client Socket is: " + ctx.channel().remoteAddress().toString() + " id: " + String.valueOf(ctx.channel().hashCode()));
        // Connection is dead, do something here...
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object object) { // (2)
        Log.w(TAG, "CLIENT: " + ctx.channel().remoteAddress().toString() + " SAYS: " + object);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // (4)
        // Close the connection for that client when an exception is raised.
        Log.e(TAG, "Something's wrong, CLIENT: " + ctx.channel().remoteAddress().toString() + " CAUSE: " + cause.toString());
        ctx.close();
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        Log.w(TAG, "LOLO");
        if (evt instanceof IdleStateEvent) {
            IdleStateEvent e = (IdleStateEvent) evt;
            if (e.state() == IdleState.READER_IDLE) {
                ctx.close(); // Close the channel so that channelInactive will be triggered
            } else if (e.state() == IdleState.WRITER_IDLE) {
                ctx.writeAndFlush("ping\n"); // Send ping to client
            }
        }
    }
}
Can anyone help me out?
IdleStateHandler should always be the first handler in your pipeline.
Alternatively, use ReadTimeoutHandler instead of IdleStateHandler and override the exceptionCaught method.
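To illustrate the first suggestion, here is a minimal sketch of the question's initializer with IdleStateHandler moved to the front of the pipeline (same handlers and timeouts as above; this is the only change):
@Override
protected void initChannel(SocketChannel ch) throws Exception {
    ChannelPipeline pipeline = ch.pipeline();
    // IdleStateHandler first, so idle events are generated regardless of
    // what the codecs behind it do with the data.
    pipeline.addLast("idleStateHandler", new IdleStateHandler(6, 3, 0, TimeUnit.SECONDS));
    pipeline.addLast("decoder", new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
    pipeline.addLast("encoder", new ObjectEncoder());
    pipeline.addLast("handler", new ServerHandler());
}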
Related
I want to close the channel when it hasn't received any data after a certain number of seconds. I tried IdleStateHandler, but it isn't working. My main handler is ClientHandler, which extends SimpleChannelInboundHandler; it sends and receives data in String format. Sometimes I don't get the data, and in that case I want my channel to close after a certain timeout, but currently it keeps waiting for data from the server.
One more observation: when I check the same request in Packet Sender, I get an empty response from the server, but this response is not received by my ClientHandler.
Following is the code.
clientBootstrap.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new IdleStateHandler(5, 5, 10))
          .addLast(new MyHandler())
          .addLast(new ClientHandler(cardIssueRequest, promise));
    }
});
MyHandler:
public class MyHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        if (evt instanceof IdleStateEvent) {
            IdleStateEvent e = (IdleStateEvent) evt;
            if (e.state() == IdleState.READER_IDLE) {
                ctx.close();
            } else if (e.state() == IdleState.WRITER_IDLE) {
                ctx.close();
            }
        }
    }
}
ClientHandler:
public class ClientHandler extends SimpleChannelInboundHandler {

    RequestModel request;
    private final Promise<String> promise;

    public ClientHandler(RequestModel request, Promise<String> promise) {
        this.request = request;
        this.promise = promise;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext channelHandlerContext, Object o) {
        String response = ((ByteBuf) o).toString(CharsetUtil.UTF_8);
        log.info("Client received: " + response);
        promise.trySuccess(response);
    }

    @Override
    public void channelActive(ChannelHandlerContext channelHandlerContext) {
        log.info("Client sent: " + request);
        channelHandlerContext.writeAndFlush(Unpooled.copiedBuffer(request.toString(), CharsetUtil.UTF_8));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext channelHandlerContext, Throwable cause) {
        cause.printStackTrace();
        channelHandlerContext.close();
        promise.setFailure(cause);
    }
}
After taking a thread dump, I found that my program was blocked waiting on the promise. After setting a timeout on the promise, my issue was solved:
promise.get(60, TimeUnit.SECONDS)
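For context, a hedged sketch of what that bounded wait might look like at the call site; the surrounding names (log, channel) are assumptions for illustration, not from the original post:
try {
    // Bounded wait: give up if the server stays silent for 60 seconds.
    String response = promise.get(60, TimeUnit.SECONDS);
    log.info("Got response: " + response);
} catch (TimeoutException e) {
    log.warn("No response within 60 seconds, closing channel");
    channel.close();
} catch (InterruptedException | ExecutionException e) {
    log.error("Request failed", e);
    channel.close();
}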
I just started using Netty. It looks very interesting, but I have encountered a problem: after the client connects, a message should be received, but that is not happening. The connection itself works fine; it says Channel Connected.
Server:
this.bossgroup = new NioEventLoopGroup();
ServerBootstrap bootstrap = new ServerBootstrap()
        .group(bossgroup)
        .channel(NioServerSocketChannel.class)
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline().addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()))
                        .addLast("decoder", new StringDecoder())
                        .addLast("encoder", new StringEncoder())
                        .addLast("timer", new ReadTimeoutHandler(10))
                        .addLast("handler", new ChannelHandler());
            }
        });
ChannelHandler:
public class ChannelHandler extends SimpleChannelInboundHandler<String> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("[+] Channel connected: " + ctx.channel().remoteAddress());
        new ClientHandler(ctx.channel());
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("Channel disconnected: " + ctx.channel().remoteAddress() + " [-]");
        ctx.close();
    }

    @Override
    protected void messageReceived(ChannelHandlerContext ctx, String msg) throws Exception {
        System.out.println(ctx.channel().remoteAddress() + ": " + msg);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        System.out.println("Exception caught, closing channel." + cause);
        ctx.close();
    }
}
Solved by: extending ChannelInboundHandlerAdapter instead of SimpleChannelInboundHandler.
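A minimal sketch of the handler under that fix, assuming Netty 4.x: in 4.x the SimpleChannelInboundHandler callback is channelRead0 (there is no messageReceived to override), whereas ChannelInboundHandlerAdapter delivers inbound messages through channelRead, so the method below actually gets called:
public class ChannelHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // msg arrives as a String because StringDecoder sits earlier in the pipeline.
        System.out.println(ctx.channel().remoteAddress() + ": " + msg);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        System.out.println("Exception caught, closing channel. " + cause);
        ctx.close();
    }
}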
I have been assigned to build a TCP server in my organization to receive text messages and split them. Unfortunately, some of my message characters turn into garbage (I used JMeter as my TCP client). I have two questions related to this problem; any help is highly appreciated.
1. Why can't we split the message using the "»" (\u00BB) character? It never worked. How can "»" be used as the delimiter in DelimiterBasedFrameDecoder?
2. Why do we receive garbage characters even though I used UTF-8 for encoding/decoding? (I only manage to receive messages when I comment out pipeline.addLast("frameDecoder", new io.netty.handler.codec.DelimiterBasedFrameDecoder(500000, byteDeli)).)
Sample request:
pov1‹1‹202030‹81056581‹0‹6‹565810000011‹0‹130418135639‹3‹4‹0‹cha7373737›chaE15E2512380›1›1«ban7373737›banE15E2512380›2›2«ind7373737›indE15E2512380›3›3»
Eclipse console: Recieved Request ::::::
pov1�1�202030�81056581�0�6�565810000011�0�130418135639�3�4�0�cha7373737�chaE15E2512380�1�1�ban7373737�banE15E2512380�2�2�ind7373737�indE15E2512380�3�3�
Server class:
public void run() {
    try {
        System.out.println("2:run");
        bootstrap
                .group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel ch) throws Exception {
                        ChannelPipeline pipeline = ch.pipeline();
                        DTMTCPServiceHandler serviceHandler = context.getBean(DTMTCPServiceHandler.class);
                        pipeline.addFirst(new LoggingHandler(LogLevel.INFO));
                        byte[] delimiter = "\u00BB".getBytes(CharsetUtil.UTF_8); // »
                        ByteBuf byteDeli = Unpooled.copiedBuffer(delimiter);
                        pipeline.addLast("frameDecoder",
                                new io.netty.handler.codec.DelimiterBasedFrameDecoder(500000, byteDeli)); // Decoders
                        pipeline.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
                        pipeline.addLast("stringEncoder", new StringEncoder(CharsetUtil.UTF_8));
                        pipeline.addLast("messageHandler", serviceHandler);
                    }
                })
                .option(ChannelOption.SO_BACKLOG, 128)
                .childOption(ChannelOption.SO_KEEPALIVE, true);
        serverChannel = bootstrap.bind(7070).sync().channel().closeFuture().sync().channel();
    } catch (InterruptedException e) {
        logger.error("POSGatewayServiceThread : InterruptedException", e);
        System.out.println(e);
    } finally {
        System.out.println("finally");
        serverChannel.close();
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
Handler class:
public class DTMTCPServiceHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        String posMessage = msg.toString();
        System.out.println("Recieved Request :::::: " + posMessage);
        String response = "-";
        ByteBuf copy = null;
        try {
            // Delegates to a separate splitter class
            response = dtmtcpServiceManager.manageDTMTCPMessage(posMessage);
            copy = Unpooled.copiedBuffer(response.getBytes());
        } finally {
            logger.info("Recieved Response :::::: " + response);
            ctx.write(copy);
            ctx.flush();
        }
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // Open
        super.channelActive(ctx);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        // End
        super.channelReadComplete(ctx);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Exception: close the channel
        ctx.close();
    }
}
Found the problem, and it is not related to Netty. The error was with the JMeter encoding; I managed to solve it by modifying the "jmeter.properties" file in \apache-jmeter-x.xx\bin:
tcp.charset=UTF-8
Sorry to trouble you guys; the fault was mine.
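For completeness, a small self-contained check (a sketch assuming Netty 4.x; the class name DelimiterCheck is made up) showing that a "»" (U+00BB) delimiter frames messages correctly once the client really sends UTF-8 bytes:
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.DelimiterBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.util.CharsetUtil;

public class DelimiterCheck {
    public static void main(String[] args) {
        // The delimiter must be the UTF-8 byte sequence of "»" (0xC2 0xBB).
        ByteBuf delimiter = Unpooled.copiedBuffer("\u00BB", CharsetUtil.UTF_8);
        EmbeddedChannel ch = new EmbeddedChannel(
                new DelimiterBasedFrameDecoder(500000, delimiter),
                new StringDecoder(CharsetUtil.UTF_8));
        // Two frames terminated by the delimiter, encoded as UTF-8.
        ch.writeInbound(Unpooled.copiedBuffer("msg1\u00BBmsg2\u00BB", CharsetUtil.UTF_8));
        System.out.println((String) ch.readInbound()); // msg1
        System.out.println((String) ch.readInbound()); // msg2
    }
}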
I currently have a problem while working with MINA. I am able to create an NioAcceptor and connector and connect the client to the server. When the session is created on the server, it sends the handshake packet, which in turn validates and sends validation back to the server to check whether files are up to date, etc. The server receives this validation, correctly deciphers the packet, and sends a packet to the client to display the game window. However, after this initial connection, I can no longer send packets to the server from the client.
ServerHandler:
@Override
public void sessionOpened(IoSession session) {
    log.info("[Login] to [" + GameConstants.GAME_NAME + ": IoSession with {} opened", session.getRemoteAddress());
    Client c = new Client(session);
    connectedClients.add(session.getRemoteAddress().toString());
    session.setAttribute(Client.KEY, c);
    c.write(PacketCreator.getHandshake());
    // c.write(PacketCreator.getPing());
}

@Override
public void messageReceived(IoSession session, Object message) {
    PacketReader reader = new PacketReader((byte[]) message);
    Client c = (Client) session.getAttribute(Client.KEY);
    short header = reader.readShort();
    PacketHandler handler = PacketProcessor.getHandler(header);
    System.out.println("Received opcode: 0x" + Integer.toHexString(header).toUpperCase());
    if (handler != null) {
        handler.handlePacket(reader, c);
    } else {
        log.info("Received opcode: 0x" + Integer.toHexString(header).toUpperCase() + " with no handler.");
    }
}

@Override
public void exceptionCaught(IoSession session, Throwable cause) {
    System.out.println("session error");
}

@Override
public void sessionClosed(IoSession session) throws Exception {
    System.out.println("Session closing: " + session.getRemoteAddress().toString());
    connectedClients.remove(session.getRemoteAddress().toString());
    Client c = (Client) session.getAttribute(Client.KEY);
    if (c != null) {
        c.disconnect();
        c.dispose();
    } else {
        log.warn("Client is null in sessionClosed for ip {} when it shouldn't be", session.getRemoteAddress());
    }
    super.sessionClosed(session);
}
ClientHandler:
@Override
public void sessionOpened(IoSession session) {
    System.out.println("Session opened: " + session);
    Server s = new Server(session);
    session.setAttribute(Server.KEY, s);
    s.write(PacketCreator.getPong());
}

@Override
public void messageReceived(IoSession session, Object message) {
    PacketReader reader = new PacketReader((byte[]) message);
    Server s = (Server) session.getAttribute(Server.KEY);
    short header = reader.readShort();
    PacketHandler handler = PacketProcessor.getHandler(header);
    if (handler != null) {
        handler.handlePacket(reader, s);
    } else {
        log.info("Received opcode: 0x" + Integer.toHexString(header).toUpperCase() + " with no handler.");
    }
}

@Override
public void exceptionCaught(IoSession session, Throwable cause) {
    System.out.println("session error");
    log.error("Exception caught in Server Handler: ", cause);
}

@Override
public void sessionClosed(IoSession session) throws Exception {
    // TODO
    System.out.println("session closed");
    super.sessionClosed(session);
}
Client (NIOConnection class):
public static void connectToServer() throws Throwable {
    NioSocketConnector connector = new NioSocketConnector();
    connector.setConnectTimeoutMillis(1000 * 30); // 30 seconds
    connector.getFilterChain().addLast("codec", new ProtocolCodecFilter(new ObjectSerializationCodecFactory()));
    connector.setHandler(new ClientHandler());
    IoSession session;
    long startTime = System.currentTimeMillis();
    for (;;) {
        try {
            ConnectFuture future = connector.connect(new InetSocketAddress("127.0.0.1", 9494)); // 24.7.142.74
            future.awaitUninterruptibly();
            session = future.getSession();
            break;
        } catch (RuntimeIoException e) {
            log.error("Failed to connect", e);
            Thread.sleep(5000);
        }
    }
    session.getCloseFuture().awaitUninterruptibly();
}
Server (NIOAcceptor class):
private static void initializeLoginServer() {
    PacketProcessor.registerHandlers();
    acceptor = new NioSocketAcceptor();
    // acceptor.getFilterChain().addLast("codec", new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("UTF-8")))); // TODO: encoding/decoding packets
    acceptor.getFilterChain().addLast("codec", new ProtocolCodecFilter(new ObjectSerializationCodecFactory()));
    acceptor.getSessionConfig().setReadBufferSize(2048);
    acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 10);
    acceptor.getSessionConfig().setTcpNoDelay(true);
    acceptor.setHandler(new ServerHandler(1));
    try {
        acceptor.bind(new InetSocketAddress(GameConstants.SERVER_PORT));
    } catch (Exception e) {
        log.error("Could not bind. ", e);
    }
    log.info("Login Server: Listening on port " + GameConstants.SERVER_PORT);
}
I have an application which uses both the TCP and UDP protocols. The main assumption is that the client connects to the server via TCP, and once the connection is established, UDP datagrams are sent.
I have to support two scenarios of connecting to the server:
- the client connects while the server is running
- the client connects while the server is down and retries the connection until the server starts again
For the first scenario everything works fine: both connections work.
The problem is with the second scenario. When the client tries a few times to connect via TCP and finally connects, the UDP connection function throws an exception:
java.net.SocketException: No buffer space available (maximum connections reached?): bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:684)
at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
at io.netty.channel.socket.nio.NioDatagramChannel.doBind(NioDatagramChannel.java:192)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:484)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1080)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:197)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:722)
When I restart the client application without doing anything to the server, the client connects without any problems.
What could cause this problem?
Below I attach the source code of the classes. All of it comes from the examples on the official Netty project page. The only thing I have modified is that I replaced static variables and functions with non-static ones, because in the future I will need many TCP-UDP connections to multiple servers.
public final class UptimeClient {

    static final String HOST = System.getProperty("host", "192.168.2.193");
    static final int PORT = Integer.parseInt(System.getProperty("port", "2011"));
    static final int RECONNECT_DELAY = Integer.parseInt(System.getProperty("reconnectDelay", "5"));
    static final int READ_TIMEOUT = Integer.parseInt(System.getProperty("readTimeout", "10"));

    private static UptimeClientHandler handler;

    public void runClient() throws Exception {
        configureBootstrap(new Bootstrap()).connect();
    }

    private Bootstrap configureBootstrap(Bootstrap b) {
        return configureBootstrap(b, new NioEventLoopGroup());
    }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone(); // To change body of generated methods, choose Tools | Templates.
    }

    Bootstrap configureBootstrap(Bootstrap b, EventLoopGroup g) {
        if (handler == null) {
            handler = new UptimeClientHandler(this);
        }
        b.group(g)
         .channel(NioSocketChannel.class)
         .remoteAddress(HOST, PORT)
         .handler(new ChannelInitializer<SocketChannel>() {
             @Override
             public void initChannel(SocketChannel ch) throws Exception {
                 ch.pipeline().addLast(new IdleStateHandler(READ_TIMEOUT, 0, 0), handler);
             }
         });
        return b;
    }

    void connect(Bootstrap b) {
        b.connect().addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.cause() != null) {
                    handler.startTime = -1;
                    handler.println("Failed to connect: " + future.cause());
                }
            }
        });
    }
}
@Sharable
public class UptimeClientHandler extends SimpleChannelInboundHandler<Object> {

    UptimeClient client;
    long startTime = -1;

    public UptimeClientHandler(UptimeClient client) {
        this.client = client;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        try {
            if (startTime < 0) {
                startTime = System.currentTimeMillis();
            }
            println("Connected to: " + ctx.channel().remoteAddress());
            new QuoteOfTheMomentClient(null).run();
        } catch (Exception ex) {
            Logger.getLogger(UptimeClientHandler.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        if (!(evt instanceof IdleStateEvent)) {
            return;
        }
        IdleStateEvent e = (IdleStateEvent) evt;
        if (e.state() == IdleState.READER_IDLE) {
            // The connection was OK but there was no traffic for the last period.
            println("Disconnecting due to no inbound traffic");
            ctx.close();
        }
    }

    @Override
    public void channelInactive(final ChannelHandlerContext ctx) {
        println("Disconnected from: " + ctx.channel().remoteAddress());
    }

    @Override
    public void channelUnregistered(final ChannelHandlerContext ctx) throws Exception {
        println("Sleeping for: " + UptimeClient.RECONNECT_DELAY + 's');
        final EventLoop loop = ctx.channel().eventLoop();
        loop.schedule(new Runnable() {
            @Override
            public void run() {
                println("Reconnecting to: " + UptimeClient.HOST + ':' + UptimeClient.PORT);
                client.connect(client.configureBootstrap(new Bootstrap(), loop));
            }
        }, UptimeClient.RECONNECT_DELAY, TimeUnit.SECONDS);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }

    void println(String msg) {
        if (startTime < 0) {
            System.err.format("[SERVER IS DOWN] %s%n", msg);
        } else {
            System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg);
        }
    }
}
public final class QuoteOfTheMomentClient {

    private ServerData config;

    public QuoteOfTheMomentClient(ServerData config) {
        this.config = config;
    }

    public void run() throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioDatagramChannel.class)
             .option(ChannelOption.SO_BROADCAST, true)
             .handler(new QuoteOfTheMomentClientHandler());
            Channel ch = b.bind(0).sync().channel();
            ch.writeAndFlush(new DatagramPacket(
                    Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8),
                    new InetSocketAddress("192.168.2.193", 8193))).sync();
            if (!ch.closeFuture().await(5000)) {
                System.err.println("QOTM request timed out.");
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            group.shutdownGracefully();
        }
    }
}
public class QuoteOfTheMomentClientHandler extends SimpleChannelInboundHandler<DatagramPacket> {

    @Override
    public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
        String response = msg.content().toString(CharsetUtil.UTF_8);
        if (response.startsWith("QOTM: ")) {
            System.out.println("Quote of the Moment: " + response.substring(6));
            ctx.close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
If your server is Windows Server 2008 (R2 or R2 SP1), this problem is likely described and solved by this Stack Overflow answer, which refers to Microsoft KB article #2577795:
This issue occurs because of a race condition in the Ancillary Function Driver
for WinSock (Afd.sys) that causes sockets to be leaked. With time, the issue
that is described in the "Symptoms" section occurs if all available socket
resources are exhausted.
If your server is Windows Server 2003, this problem is likely described and solved by this Stack Overflow answer, which refers to Microsoft KB article #196271:
The default maximum number of ephemeral TCP ports is 5000 in the products that
are included in the "Applies to" section. A new parameter has been added in
these products. To increase the maximum number of ephemeral ports, follow these
steps...
...which basically means that you have run out of ephemeral ports.
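Independent of the Windows-side fix, note that the posted client also churns ephemeral UDP ports: every TCP channelActive runs QuoteOfTheMomentClient, which creates a new NioEventLoopGroup and binds a fresh datagram socket. A hedged sketch of one way to reduce that churn, assuming reusing a single UDP channel fits your design (the class name UdpSender is made up for illustration):
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.util.CharsetUtil;
import java.net.InetSocketAddress;

public final class UdpSender {

    private final Channel udpChannel; // bound once, reused for every request

    public UdpSender(EventLoopGroup group) throws InterruptedException {
        Bootstrap b = new Bootstrap()
                .group(group) // reuse the existing event loop group instead of creating one per connect
                .channel(NioDatagramChannel.class)
                .handler(new QuoteOfTheMomentClientHandler());
        this.udpChannel = b.bind(0).sync().channel(); // one ephemeral port, not one per reconnect
    }

    public void requestQuote(InetSocketAddress server) {
        udpChannel.writeAndFlush(new DatagramPacket(
                Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8), server));
    }
}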