Netty not closing channels normally - java

I'm new to Netty and am migrating my server to it because of high load. I have read lots of manuals and am doing everything as in the examples, but I have a bug: CPU load grows over time, and very fast.
Analyzing the heap and logs shows me that channels which throw exceptions (and maybe not only those) are not being closed normally and stay registered in the workers' selectors. I have 150+ open channels with only 50 users online.
My code is as follows.
Creating server:
networkServer = new ServerBootstrap(new NioServerSocketChannelFactory(bossExec, ioExec, 4));
networkServer.setOption("backlog", 500);
networkServer.setOption("connectTimeoutMillis", 10000);
networkServer.setPipelineFactory(new ServerPipelineFactory());
Channel channel = networkServer.bind(new InetSocketAddress(address, port));
Pipeline factory:
@Override
public ChannelPipeline getPipeline() throws Exception {
PacketFrameDecoder decoder = new PacketFrameDecoder();
PacketFrameEncoder encoder = new PacketFrameEncoder();
return Channels.pipeline(decoder, encoder, new PlayerHandler(decoder, encoder));
}
Decoder:
public class PacketFrameDecoder extends FrameDecoder {
@Override
public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
}
@Override
protected Object decode(ChannelHandlerContext arg0, Channel arg1, ChannelBuffer buffer) throws Exception {
try {
buffer.markReaderIndex();
Packet p = // ... doing decoding things
if(p != null)
return p;
// Reset the buffer if decoding did not succeed
buffer.resetReaderIndex();
// Also reset the buffer if an exception signalled that there were not enough bytes yet
} catch(BufferUnderflowException e) {
buffer.resetReaderIndex();
} catch(ArrayIndexOutOfBoundsException aioobe) {
buffer.resetReaderIndex();
}
return null;
}
}
And handler:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
worker = new PlayerWorkerThread(this, e.getChannel());
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
worker.disconnectedFromChannel();
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
if(e.getChannel().isOpen())
worker.acceptPacket((Packet) e.getMessage());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
Server.logger.log(Level.WARNING, "Exception from downstream", e.getCause());
ctx.getChannel().close();
}
Everything is working very well, except the growing CPU load...
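For what it's worth, one quick way to confirm whether channels really stay open is to track them in a ChannelGroup and log the count on open/close. This is only a diagnostic sketch for Netty 3 (the handler name is made up and is not part of the original pipeline); a DefaultChannelGroup drops channels automatically once they close, so if its size keeps growing past the number of online users, channels are indeed being leaked:
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.channel.group.ChannelGroup;
import org.jboss.netty.channel.group.DefaultChannelGroup;
public class ChannelTrackingHandler extends SimpleChannelUpstreamHandler {
    public static final ChannelGroup OPEN_CHANNELS = new DefaultChannelGroup("open-channels");
    @Override
    public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        OPEN_CHANNELS.add(e.getChannel()); // removed from the group automatically when the channel closes
        System.out.println("open channels: " + OPEN_CHANNELS.size());
        super.channelOpen(ctx, e);
    }
    @Override
    public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("channel closed, remaining: " + OPEN_CHANNELS.size());
        super.channelClosed(ctx, e);
    }
}
Adding it as the first handler in getPipeline() would show whether channelClosed actually fires for the channels that hit exceptionCaught.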

Related

SimpleChannelInboundHandler never fires channelRead0

One day I decided to create a Netty chat server using the TCP protocol. Currently it successfully logs connects and disconnects, but channelRead0 in my handler never fires. I tried a Python client.
Netty version: 4.1.6.Final
Handler code:
public class ServerWrapperHandler extends SimpleChannelInboundHandler<String> {
private final TcpServer server;
public ServerWrapperHandler(TcpServer server){
this.server = server;
}
@Override
public void handlerAdded(ChannelHandlerContext ctx) {
System.out.println("Client connected.");
server.addClient(ctx);
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) {
System.out.println("Client disconnected.");
server.removeClient(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, String msg) {
System.out.println("Message received.");
server.handleMessage(ctx, msg);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
System.out.println("Read complete.");
super.channelReadComplete(ctx);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
Output:
[TCPServ] Starting on 0.0.0.0:1052
Client connected.
Read complete.
Read complete.
Client disconnected.
Client code:
import socket
conn = socket.socket()
conn.connect(("127.0.0.1", 1052))
conn.send("Hello")
tmp = conn.recv(1024)
while tmp:
data += tmp
tmp = conn.recv(1024)
print(data.decode("utf-8"))
conn.close()
By the way, the problem was in my initializer: I had added a DelimiterBasedFrameDecoder to my pipeline, and that decoder was holding the messages back. I don't know why, but I didn't need the decoder, so I just deleted it and everything started to work.
@Override
protected void initChannel(SocketChannel ch) throws Exception {
// Create a default pipeline implementation.
ChannelPipeline pipeline = ch.pipeline();
// Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java object.
// Protocol Encoder - translates a Java object into binary data.
// Add the text line codec combination first,
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter())); //<--- DELETE THIS
pipeline.addLast("decoder", new StringDecoder());
pipeline.addLast("encoder", new StringEncoder());
pipeline.addLast("handler", new ServerWrapperHandler(tcpServer));
}
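For what it's worth, the reason the framer blocked everything is that DelimiterBasedFrameDecoder with Delimiters.lineDelimiter() buffers bytes until it sees "\r\n" or "\n", and the Python client above sends "Hello" with no newline, so no complete frame is ever handed to channelRead0. A small stand-alone sketch with EmbeddedChannel (not from the original post) shows the behaviour:
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.DelimiterBasedFrameDecoder;
import io.netty.handler.codec.Delimiters;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.util.CharsetUtil;
public class FramerDemo {
    public static void main(String[] args) {
        EmbeddedChannel ch = new EmbeddedChannel(
                new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()),
                new StringDecoder());
        // "Hello" without a newline: the framer buffers it and nothing reaches the handler
        ch.writeInbound(Unpooled.copiedBuffer("Hello", CharsetUtil.UTF_8));
        System.out.println(ch.readInbound() == null);  // true - no frame emitted yet
        // once the delimiter arrives, the buffered bytes are emitted as a single frame
        ch.writeInbound(Unpooled.copiedBuffer("\n", CharsetUtil.UTF_8));
        System.out.println((String) ch.readInbound()); // Hello
    }
}
So keeping the framer would also have worked, provided the client terminated each message with a newline (e.g. conn.send(b"Hello\n")).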

I'm confused by the "read" method of ChannelOutboundHandler in the Netty framework

I have no idea what the "read" method of ChannelOutboundHandler is for. Why does this method appear on the outbound side at all? After all, outbound handlers are used to handle output; they are usually used to write data to the wire.
I also ran into a problem when using the "read" method. I wrote an EchoServer with Netty; here is my code:
public class EchoServer {
public static void main(String[] args) {
// this handler is sharable
final EchoServerInboundHandler echoServerInboundHandler = new EchoServerInboundHandler();
EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
ServerBootstrap serverBootstrap = new ServerBootstrap();
// setup group
serverBootstrap.group(eventLoopGroup)
// setup channelFactory
.channel(NioServerSocketChannel.class)
// listening port
.localAddress(new InetSocketAddress(8080))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(echoServerInboundHandler);
pipeline.addLast(new EchoServerOutboundHandler());
}
});
try {
ChannelFuture f = serverBootstrap.bind().sync();
f.addListener(new ChannelFutureListener() {
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
System.out.println(future.channel().localAddress()+" started!");
}
}
});
f.channel().closeFuture().sync();
} catch (InterruptedException e) {
e.printStackTrace();
}finally {
// release all threads
eventLoopGroup.shutdownGracefully().syncUninterruptibly();
}
}
}
public class EchoServerOutboundHandler extends ChannelOutboundHandlerAdapter {
@Override
public void read(ChannelHandlerContext ctx) throws Exception {
System.out.println("read");
//super.read(ctx);
}
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
System.out.println("msg:"+msg);
super.write(ctx, msg, promise);
}
@Override
public void flush(ChannelHandlerContext ctx) throws Exception {
super.flush(ctx);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
cause.printStackTrace();
ctx.close();
}
}
@ChannelHandler.Sharable
public class EchoServerInboundHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ctx.writeAndFlush(msg);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
cause.printStackTrace();
ctx.close();
}
@Override
public void channelActive(ChannelHandlerContext ctx) {
System.out.println("client:"+ctx.channel().remoteAddress()+" connected!");
}
@Override
public void channelInactive(ChannelHandlerContext ctx) {
System.out.println("client:"+ctx.channel().remoteAddress()+" disconnected!");
}
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
System.out.println("EchoServerInboundHandler registered");
super.channelRegistered(ctx);
}
@Override
public void channelUnregistered(ChannelHandlerContext ctx) throws Exception {
System.out.println("EchoServerInboundHandler unregistered");
super.channelUnregistered(ctx);
}
}
As you can see, I commented out super.read(ctx) in the read method of EchoServerOutboundHandler, which causes a problem: the server can no longer write data back to the client. I also found that this method is called when a client establishes a connection.
At the very least, you need to do the same thing the parent class would do in super.read(ctx), which is call ctx.read().
Which is what the Javadoc says:
Calls ChannelHandlerContext.read() ...
Otherwise, all you're doing is printing something on your own, and not letting Netty handle anything.
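A minimal fix, assuming the only reason for overriding read was the log statement, is to forward the read request yourself (this mirrors what ChannelOutboundHandlerAdapter.read does):
@Override
public void read(ChannelHandlerContext ctx) throws Exception {
    System.out.println("read");
    // Forward the read request towards the transport; without this the channel
    // never asks the socket for data, so channelRead on the inbound side never
    // fires and the echo server has nothing to write back.
    ctx.read();
}
In other words, read is an outbound operation because it is a request travelling towards the socket ("please read more data"), even though the data it produces later flows inbound.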

Netty TCP server characters become garbage

I have been assigned to build a TCP server in my organization to receive text messages and split them. Unfortunately, some of my message characters become garbage (I used JMeter as my TCP client). I have two questions related to this problem; any help is highly appreciated.
Why can't we split my message using the "»" (\u00BB) character? It never worked. How could we use "»" as the delimiter in a DelimiterBasedFrameDecoder?
Why do we receive garbage characters although I used UTF-8 in encoding/decoding? (I only manage to receive messages when I comment out pipeline.addLast("frameDecoder", new io.netty.handler.codec.DelimiterBasedFrameDecoder(500000, byteDeli)).)
Sample request:
pov1‹1‹202030‹81056581‹0‹6‹565810000011‹0‹130418135639‹3‹4‹0‹cha7373737›chaE15E2512380›1›1«ban7373737›banE15E2512380›2›2«ind7373737›indE15E2512380›3›3»
Eclipse console: Recieved Request ::::::
pov1�1�202030�81056581�0�6�565810000011�0�130418135639�3�4�0�cha7373737�chaE15E2512380�1�1�ban7373737�banE15E2512380�2�2�ind7373737�indE15E2512380�3�3�
Server class:-
public void run() {
try {
System.out.println("2:run");
bootstrap
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ChannelPipeline pipeline = ch.pipeline();
DTMTCPServiceHandler serviceHandler = context
.getBean(DTMTCPServiceHandler.class);
pipeline.addFirst(new LoggingHandler(
LogLevel.INFO));
byte[] delimiter = "\u00BB".getBytes(CharsetUtil.UTF_8);//»
ByteBuf byteDeli = Unpooled.copiedBuffer(delimiter);
pipeline.addLast(
"frameDecoder",
new io.netty.handler.codec.DelimiterBasedFrameDecoder(
500000, byteDeli)); // Decoders
pipeline.addLast("stringDecoder",
new StringDecoder(CharsetUtil.UTF_8));
pipeline.addLast("stringEncoder",
new StringEncoder(CharsetUtil.UTF_8));
pipeline.addLast("messageHandler",
serviceHandler);
}
}).option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
serverChannel = bootstrap.bind(7070).sync().channel()
.closeFuture().sync().channel();
} catch (InterruptedException e) {
//error
logger.error("POSGatewayServiceThread : InterruptedException",
e);
System.out.println(e);
} finally {
//finally
System.out.println("finally");
serverChannel.close();
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
Handler class
public class DTMTCPServiceHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
String posMessage = msg.toString();
System.out.println("Recieved Request :::::: " + posMessage);
String response = "-";
ByteBuf copy = null;
try {
//Called to separate splitter class
response = dtmtcpServiceManager.manageDTMTCPMessage(posMessage);
copy = Unpooled.copiedBuffer(response.getBytes());
} finally {
logger.info("Recieved Response :::::: " + response);
ctx.write(copy);
ctx.flush();
}
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
//Open
super.channelActive(ctx);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
//End
super.channelReadComplete(ctx);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
//exception
ctx.close();
}
}
Found the problem, and it is not related to Netty. The error was with the JMeter encoding; I managed to solve it after modifying the "jmeter.properties" property file under \apache-jmeter-x.xx\bin:
tcp.charset=UTF-8
Sorry to trouble you guys; the fault was on my side.
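That resolution fits the symptoms. The server builds both its delimiter and its StringDecoder from UTF-8, where "»" is the two-byte sequence 0xC2 0xBB; if JMeter was sending the platform default charset (e.g. ISO-8859-1 or windows-1252), "»" goes on the wire as the single byte 0xBB, so the frame decoder never sees its delimiter, and the UTF-8 StringDecoder turns the stray high bytes into the replacement character "�" seen in the console. A small sketch (not from the original post) showing the byte-level difference:
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
public class DelimiterBytesDemo {
    public static void main(String[] args) {
        // UTF-8 encodes "»" as two bytes: 0xC2 0xBB
        System.out.println(Arrays.toString("\u00BB".getBytes(StandardCharsets.UTF_8)));      // [-62, -69]
        // ISO-8859-1 / windows-1252 encode it as a single byte: 0xBB
        System.out.println(Arrays.toString("\u00BB".getBytes(StandardCharsets.ISO_8859_1))); // [-69]
    }
}
Once tcp.charset=UTF-8 makes JMeter send the same two-byte sequence the DelimiterBasedFrameDecoder was configured with, both the splitting and the decoding line up again.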

Java, Netty, TCP and UDP connection integration : No buffer space available for UDP connection

I have an application which uses both the TCP and UDP protocols. The main assumption is that the client connects to the server via TCP, and once the connection is established, UDP datagrams are sent.
I have to support two scenarios of connecting to the server:
- the client connects while the server is running
- the client connects while the server is down and retries the connection until the server starts again
For the first scenario everything works fine: both connections work. The problem is with the second scenario. When the client tries a few times to connect via TCP and finally connects, the UDP connection function throws an exception:
java.net.SocketException: No buffer space available (maximum connections reached?): bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:684)
at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
at io.netty.channel.socket.nio.NioDatagramChannel.doBind(NioDatagramChannel.java:192)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:484)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1080)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:197)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:722)
When I restart the client application without touching the server, the client connects without any problems.
What can cause the problem?
Below I attach the source code of the classes. All of it comes from examples on the official Netty project page; the only thing I have modified is replacing static variables and functions with non-static ones, because in the future I will need many TCP-UDP connections to multiple servers.
public final class UptimeClient {
static final String HOST = System.getProperty("host", "192.168.2.193");
static final int PORT = Integer.parseInt(System.getProperty("port", "2011"));
static final int RECONNECT_DELAY = Integer.parseInt(System.getProperty("reconnectDelay", "5"));
static final int READ_TIMEOUT = Integer.parseInt(System.getProperty("readTimeout", "10"));
private static UptimeClientHandler handler;
public void runClient() throws Exception {
configureBootstrap(new Bootstrap()).connect();
}
private Bootstrap configureBootstrap(Bootstrap b) {
return configureBootstrap(b, new NioEventLoopGroup());
}
@Override
protected Object clone() throws CloneNotSupportedException {
return super.clone(); //To change body of generated methods, choose Tools | Templates.
}
Bootstrap configureBootstrap(Bootstrap b, EventLoopGroup g) {
if(handler == null){
handler = new UptimeClientHandler(this);
}
b.group(g)
.channel(NioSocketChannel.class)
.remoteAddress(HOST, PORT)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new IdleStateHandler(READ_TIMEOUT, 0, 0), handler);
}
});
return b;
}
void connect(Bootstrap b) {
b.connect().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.cause() != null) {
handler.startTime = -1;
handler.println("Failed to connect: " + future.cause());
}
}
});
}
}
@Sharable
public class UptimeClientHandler extends SimpleChannelInboundHandler<Object> {
UptimeClient client;
public UptimeClientHandler(UptimeClient client){
this.client = client;
}
long startTime = -1;
@Override
public void channelActive(ChannelHandlerContext ctx) {
try {
if (startTime < 0) {
startTime = System.currentTimeMillis();
}
println("Connected to: " + ctx.channel().remoteAddress());
new QuoteOfTheMomentClient(null).run();
} catch (Exception ex) {
Logger.getLogger(UptimeClientHandler.class.getName()).log(Level.SEVERE, null, ex);
}
}
@Override
public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
}
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
if (!(evt instanceof IdleStateEvent)) {
return;
}
IdleStateEvent e = (IdleStateEvent) evt;
if (e.state() == IdleState.READER_IDLE) {
// The connection was OK but there was no traffic for last period.
println("Disconnecting due to no inbound traffic");
ctx.close();
}
}
@Override
public void channelInactive(final ChannelHandlerContext ctx) {
println("Disconnected from: " + ctx.channel().remoteAddress());
}
@Override
public void channelUnregistered(final ChannelHandlerContext ctx) throws Exception {
println("Sleeping for: " + UptimeClient.RECONNECT_DELAY + 's');
final EventLoop loop = ctx.channel().eventLoop();
loop.schedule(new Runnable() {
@Override
public void run() {
println("Reconnecting to: " + UptimeClient.HOST + ':' + UptimeClient.PORT);
client.connect(client.configureBootstrap(new Bootstrap(), loop));
}
}, UptimeClient.RECONNECT_DELAY, TimeUnit.SECONDS);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
void println(String msg) {
if (startTime < 0) {
System.err.format("[SERVER IS DOWN] %s%n", msg);
} else {
System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg);
}
}
}
public final class QuoteOfTheMomentClient {
private ServerData config;
public QuoteOfTheMomentClient(ServerData config){
this.config = config;
}
public void run() throws Exception {
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioDatagramChannel.class)
.option(ChannelOption.SO_BROADCAST, true)
.handler(new QuoteOfTheMomentClientHandler());
Channel ch = b.bind(0).sync().channel();
ch.writeAndFlush(new DatagramPacket(
Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8),
new InetSocketAddress("192.168.2.193", 8193))).sync();
if (!ch.closeFuture().await(5000)) {
System.err.println("QOTM request timed out.");
}
}
catch(Exception ex)
{
ex.printStackTrace();
}
finally {
group.shutdownGracefully();
}
}
}
public class QuoteOfTheMomentClientHandler extends SimpleChannelInboundHandler<DatagramPacket> {
@Override
public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
String response = msg.content().toString(CharsetUtil.UTF_8);
if (response.startsWith("QOTM: ")) {
System.out.println("Quote of the Moment: " + response.substring(6));
ctx.close();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
If your server is Windows Server 2008 (R2 or R2 SP1), this problem is likely described and solved by this Stack Overflow answer, which refers to Microsoft KB article #2577795:
This issue occurs because of a race condition in the Ancillary Function Driver
for WinSock (Afd.sys) that causes sockets to be leaked. With time, the issue
that is described in the "Symptoms" section occurs if all available socket
resources are exhausted.
If your server is Windows Server 2003, this problem is likely described and solved by this Stack Overflow answer, which refers to Microsoft KB article #196271:
The default maximum number of ephemeral TCP ports is 5000 in the products that
are included in the "Applies to" section. A new parameter has been added in
these products. To increase the maximum number of ephemeral ports, follow these
steps...
...which basically means that you have run out of ephemeral ports.
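Independent of the Windows-level fix, it may also help to stop binding a brand-new UDP socket (with its own NioEventLoopGroup) for every request, since each bind(0) claims another ephemeral port. This is only a rough sketch of that idea (the class and method names are made up, and the original QuoteOfTheMomentClientHandler would have to stop calling ctx.close() after each response for the channel to stay reusable):
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.util.CharsetUtil;
import java.net.InetSocketAddress;
public final class ReusableQotmClient {
    private final EventLoopGroup group = new NioEventLoopGroup(1);
    private final Channel channel;
    public ReusableQotmClient() throws InterruptedException {
        Bootstrap b = new Bootstrap()
                .group(group)
                .channel(NioDatagramChannel.class)
                .option(ChannelOption.SO_BROADCAST, true)
                .handler(new QuoteOfTheMomentClientHandler());
        // bind once: a single ephemeral port for the whole lifetime of the client
        channel = b.bind(0).sync().channel();
    }
    public void requestQuote(InetSocketAddress server) {
        channel.writeAndFlush(new DatagramPacket(
                Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8), server));
    }
    public void shutdown() {
        group.shutdownGracefully();
    }
}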

Netty Channels Randomly closing

My problem is that, at random times when several channels are connected to my game, there will be a disconnection causing all of the channels to be closed. The channelDisconnected method is called, and I've done a stacktrace printout of this occurrence and everything seems normal. My code is below; what is the problem?
public final class ServerChannelHandler extends SimpleChannelHandler {
private static ChannelGroup channels;
private static ServerBootstrap bootstrap;
public static final void init() {
new ServerChannelHandler();
}
/**
* Gets the amount of channels that are currently connected to us
* @return A {@code Integer} {@code Object}
*/
public static int getConnectedChannelsSize() {
return channels == null ? 0 : channels.size();
}
/**
* Creates a new private constructor of this class
*/
private ServerChannelHandler() {
channels = new DefaultChannelGroup();
bootstrap = new ServerBootstrap(new NioServerSocketChannelFactory(CoresManager.serverBossChannelExecutor, CoresManager.serverWorkerChannelExecutor, CoresManager.serverWorkersCount));
bootstrap.getPipeline().addLast("handler", this);
bootstrap.setOption("reuseAddress", true); // reuses adress for bind
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.TcpAckFrequency", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.bind(new InetSocketAddress(Constants.PORT_ID));
}
/**
* What happens when a new channel is open and connects to us
*/
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
channels.add(e.getChannel());
}
/**
* What happens when an open channel closes the connection with us
*/
@Override
public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e) {
channels.remove(e.getChannel());
}
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
ctx.setAttachment(new Session(e.getChannel()));
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
Object sessionObject = ctx.getAttachment();
if (sessionObject != null && sessionObject instanceof Session) {
Session session = (Session) sessionObject;
if (session.getDecoder() == null)
return;
if (session.getDecoder() instanceof WorldPacketsDecoder) {
session.getWorldPackets().getPlayer().finish();
}
}
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
if (!(e.getMessage() instanceof ChannelBuffer))
return;
Object sessionObject = ctx.getAttachment();
if (sessionObject != null && sessionObject instanceof Session) {
Session session = (Session) sessionObject;
if (session.getDecoder() == null)
return;
ChannelBuffer buf = (ChannelBuffer) e.getMessage();
buf.markReaderIndex();
int avail = buf.readableBytes();
if (avail < 1 || avail > Constants.RECEIVE_DATA_LIMIT) {
return;
}
byte[] buffer = new byte[avail];
buf.readBytes(buffer);
try {
session.getDecoder().decode(new InputStream(buffer));
} catch (Throwable er) {
er.printStackTrace();
}
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent ee) throws Exception {
ee.getCause().printStackTrace();
}
/**
* On server shutdown, all channels are closed and the external resources
* are released from the {@link #bootstrap}
*/
public static final void shutdown() {
channels.close().awaitUninterruptibly();
bootstrap.releaseExternalResources();
}
}
StackTrace:
java.lang.Exception: Stack trace
at java.lang.Thread.dumpStack(Thread.java:1364)
at com.sallesy.game.player.Player.realFinish(Player.java:854)
at com.sallesy.game.player.Player.finish(Player.java:848)
at com.sallesy.game.player.Player.finish(Player.java:815)
at com.sallesy.networking.ServerChannelHandler.channelDisconnected(ServerChannelHandler.java:71)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:120)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:399)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:721)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:111)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:66)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:774)
at org.jboss.netty.channel.SimpleChannelHandler.closeRequested(SimpleChannelHandler.java:338)
at org.jboss.netty.channel.SimpleChannelHandler.handleDownstream(SimpleChannelHandler.java:260)
at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:585)
at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:576)
at org.jboss.netty.channel.Channels.close(Channels.java:820)
at org.jboss.netty.channel.AbstractChannel.close(AbstractChannel.java:197)
at org.jboss.netty.channel.ChannelFutureListener$1.operationComplete(ChannelFutureListener.java:41)
at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:428)
at org.jboss.netty.channel.DefaultChannelFuture.addListener(DefaultChannelFuture.java:145)
at com.sallesy.networking.encoders.WorldPacketsEncoder.sendLogout(WorldPacketsEncoder.java:1179)
at com.sallesy.game.player.Player.logout(Player.java:801)
at com.sallesy.networking.decoders.handlers.ButtonHandler.handleButtons(ButtonHandler.java:227)
at com.sallesy.networking.decoders.WorldPacketsDecoder.processPackets(WorldPacketsDecoder.java:1122)
at com.sallesy.networking.decoders.WorldPacketsDecoder.decode(WorldPacketsDecoder.java:297)
at com.sallesy.networking.ServerChannelHandler.messageReceived(ServerChannelHandler.java:100)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Bit of a guess really, but it could be keepalive timeouts.
You could try using an IdleStateHandler to keep the connections alive.
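For reference, in Netty 3 (which this code uses) that would look roughly like the sketch below: an IdleStateHandler driven by a shared HashedWheelTimer, plus an idle-aware handler that writes some small keepalive packet. The helper class, handler names, and the one-byte keepalive payload are made up for illustration; a real keepalive would have to be something the game protocol's decoder tolerates:
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelHandler;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.handler.timeout.IdleState;
import org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler;
import org.jboss.netty.handler.timeout.IdleStateEvent;
import org.jboss.netty.handler.timeout.IdleStateHandler;
import org.jboss.netty.util.HashedWheelTimer;
import org.jboss.netty.util.Timer;
public final class KeepAlivePipeline {
    private static final Timer TIMER = new HashedWheelTimer();
    // Installs an idle detector ahead of the game handler: if nothing has been
    // written to a channel for 30 seconds, a small keepalive packet is sent so
    // idle connections are not dropped along the way.
    public static void install(ChannelPipeline pipeline, ChannelHandler gameHandler) {
        pipeline.addLast("idleDetector", new IdleStateHandler(TIMER, 0, 30, 0));
        pipeline.addLast("keepAlive", new IdleStateAwareChannelHandler() {
            @Override
            public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
                if (e.getState() == IdleState.WRITER_IDLE) {
                    // placeholder payload; replace with a real ping understood by the client
                    ctx.getChannel().write(ChannelBuffers.wrappedBuffer(new byte[] { 0 }));
                }
            }
        });
        pipeline.addLast("handler", gameHandler);
    }
}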
