Netty : Why use ChannelInboundHandlerAdapter in TimeServerHandler? - java

I am going through Netty's documentation here and the diagram here.
My question is: the TimeServer writes the time into the socket for the client to read. Shouldn't it use ChannelOutboundHandlerAdapter? Why is the logic in ChannelInboundHandlerAdapter?
I couldn't understand this; please explain.
TimeServer:
public class TimeServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(final ChannelHandlerContext ctx) { // (1)
        final ByteBuf time = ctx.alloc().buffer(4); // (2)
        time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));

        final ChannelFuture f = ctx.writeAndFlush(time); // (3)
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                assert f == future;
                ctx.close();
            }
        }); // (4)
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
TimeClient:
public class TimeClient {
    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}
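For context, the TimeClientHandler added to the pipeline above is not shown in the question. In the Netty user guide it is also an inbound handler, because it reacts to data arriving on the channel; a sketch based on that guide (it may differ from the asker's actual version):

// Sketch of the user guide's TimeClientHandler (not part of the question):
// the client side is also "inbound", since it reads the 4-byte time.
public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg;
        try {
            long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        } finally {
            m.release();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}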

The reason we use a ChannelInboundHandlerAdapter is that we are writing into the channel that was established by the client to the server. Since that channel is inbound with respect to the server, we use a ChannelInboundHandlerAdapter. The client connects to the server through this channel, and the server sends the time out over it.

Because your server responds to an incoming event (the client's connection becoming active), it needs to implement the ChannelInboundHandler interface, which defines methods for acting on inbound events.
ChannelInboundHandlerAdapter is a convenient base implementation with a straightforward API: each of its methods can be overridden to hook into the event lifecycle at the appropriate point.
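To make the inbound/outbound distinction concrete: an outbound handler is only needed if you want to intercept outbound operations such as write() or connect() on their way to the socket. A hypothetical sketch (the class name and log message are made up for illustration):

// An outbound handler intercepts operations like write() and flush(); it does
// not receive events such as channelActive or channelRead, which is why the
// time-writing logic lives in an inbound handler instead.
public class TimeWriteLoggingHandler extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        System.out.println("Outbound write intercepted: " + msg);
        ctx.write(msg, promise); // pass the write further down the pipeline
    }
}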
I just started learning Netty, hope this helps. :)

Related

Shutting down Netty server itself

I'm using a Netty server to solve this: reading a big file line by line and processing it. Since doing it on a single machine is still slow, I've decided to use the server to serve chunks of data to clients. That already works, but what I also want is for the server to shut itself down once the whole file has been processed. The source code I'm using right now is:
public static void main(String[] args) throws Exception {
    new Thread(() -> {
        // reading the big file and populating 'dq' - data queue
    }).start();

    final EventLoopGroup bGrp = new NioEventLoopGroup(1);
    final EventLoopGroup wGrp = new NioEventLoopGroup();
    try {
        ServerBootstrap b = new ServerBootstrap();
        b.group(bGrp, wGrp)
         .channel(NioServerSocketChannel.class)
         .option(ChannelOption.SO_BACKLOG, 100)
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             public void initChannel(SocketChannel ch) {
                 ChannelPipeline p = ch.pipeline();
                 p.addLast(new StringDecoder());
                 p.addLast(new StringEncoder());
                 p.addLast(new ServerHandler(dq, bGrp, wGrp));
             }
         });

        ChannelFuture f = b.bind(PORT).sync();
        f.channel().closeFuture().sync();
    } finally {
        wGrp.shutdownGracefully();
        bGrp.shutdownGracefully();
    }
}
class ServerHandler extends ChannelInboundHandlerAdapter {

    // Field and parameter types are assumed here; the question omits them.
    private final Queue<String> dq;
    private final EventLoopGroup bGrp;
    private final EventLoopGroup wGrp;

    public ServerHandler(Queue<String> dq, EventLoopGroup bGrp, EventLoopGroup wGrp) {
        // assigning params to instance fields
        this.dq = dq;
        this.bGrp = bGrp;
        this.wGrp = wGrp;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // creating bulk of data from 'dq' and sending to client
        /* e.g.
        ctx.write(dq.get());
        */
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        if (dq.isEmpty() /* or other check that file was processed */) {
            try {
                ctx.channel().closeFuture().sync();
            } catch (InterruptedException ie) {
                // ...
            }
            wGrp.shutdownGracefully();
            bGrp.shutdownGracefully();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        ctx.close();
        ctx.executor().parent().shutdownGracefully();
    }
}
Is the server shutdown in the channelReadComplete(...) method correct? What I'm afraid of is that another client may still be being served (e.g. a big bulk is being sent to another client while the current client has reached the end of 'dq').
The base code is from the Netty EchoServer/DiscardServer examples.
The question is: how do I shut down a Netty server (from a handler) when a specific condition is reached?
Thanks
You cannot shut down a server from within a handler. What you can do is signal to a different thread that it should shut down the server.
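A minimal, self-contained sketch of that idea. The class name, port, and the point at which the latch is counted down are illustrative; in your case the countDown would go behind the 'dq.isEmpty()' check:

public final class ShutdownSignalExample {
    // Shared between the handler and the thread that owns the server bootstrap.
    static final CountDownLatch SHUTDOWN = new CountDownLatch(1);

    // In the handler: signal instead of shutting the groups down directly.
    static class SignallingHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) {
            ctx.close();            // close this client's channel
            SHUTDOWN.countDown();   // ask the owning thread to stop the server
        }
    }

    public static void main(String[] args) throws Exception {
        EventLoopGroup bGrp = new NioEventLoopGroup(1);
        EventLoopGroup wGrp = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bGrp, wGrp)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new SignallingHandler());
                 }
             });

            Channel server = b.bind(8080).sync().channel();
            SHUTDOWN.await();       // wait here instead of closeFuture().sync()
            server.close().sync();  // stop accepting new connections
        } finally {
            wGrp.shutdownGracefully();
            bGrp.shutdownGracefully();
        }
    }
}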

SimpleChannelInboundHandler never fires channelRead0

One day I decided to create a Netty chat server using the TCP protocol. Currently it successfully logs connects and disconnects, but channelRead0 in my handler never fires. I tested it with a Python client.
Netty version: 4.1.6.Final
Handler code:
public class ServerWrapperHandler extends SimpleChannelInboundHandler<String> {
    private final TcpServer server;

    public ServerWrapperHandler(TcpServer server) {
        this.server = server;
    }

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        System.out.println("Client connected.");
        server.addClient(ctx);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        System.out.println("Client disconnected.");
        server.removeClient(ctx);
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, String msg) {
        System.out.println("Message received.");
        server.handleMessage(ctx, msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        System.out.println("Read complete.");
        super.channelReadComplete(ctx);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Output:
[TCPServ] Starting on 0.0.0.0:1052
Client connected.
Read complete.
Read complete.
Client disconnected.
Client code:
import socket

conn = socket.socket()
conn.connect(("127.0.0.1", 1052))
conn.send(b"Hello")

data = b""
tmp = conn.recv(1024)
while tmp:
    data += tmp
    tmp = conn.recv(1024)
print(data.decode("utf-8"))
conn.close()
By the way, the problem was in my initializer: I had added a DelimiterBasedFrameDecoder to my pipeline, and this decoder was holding the messages back (most likely because the client never sends a line delimiter, so the decoder keeps buffering and never emits a frame). I didn't need this decoder, so I just deleted it, and everything started to work.
@Override
protected void initChannel(SocketChannel ch) throws Exception {
    // Create a default pipeline implementation.
    ChannelPipeline pipeline = ch.pipeline();

    // Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java object.
    // Protocol Encoder - translates a Java object into binary data.

    // Add the text line codec combination first,
    pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter())); // <--- DELETE THIS
    pipeline.addLast("decoder", new StringDecoder());
    pipeline.addLast("encoder", new StringEncoder());
    pipeline.addLast("handler", new ServerWrapperHandler(tcpServer));
}
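When a handler appears to never receive anything, a quick way to see whether bytes reach the pipeline at all is to temporarily add Netty's LoggingHandler in front of the decoders. A sketch of the same initializer with logging added (assuming the io.netty.handler.logging package from Netty 4.x):

@Override
protected void initChannel(SocketChannel ch) throws Exception {
    ChannelPipeline pipeline = ch.pipeline();

    // Logs every inbound/outbound event and the raw bytes, which makes it easy
    // to tell whether data is stuck in a decoder or never arrived at all.
    pipeline.addLast("logger", new LoggingHandler(LogLevel.INFO));

    pipeline.addLast("decoder", new StringDecoder());
    pipeline.addLast("encoder", new StringEncoder());
    pipeline.addLast("handler", new ServerWrapperHandler(tcpServer));
}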

Using Netty to send several RTSP messages

I want to create an RTSP client to send some RTSP messages. I am using Netty to write it, but my code can only send one message. How do I send another message?
My client code looks like this:
public class RtspClient {

    public static class ClientHandler extends SimpleChannelInboundHandler<DefaultHttpResponse> {
        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) {
            ctx.flush();
        }

        protected void channelRead0(ChannelHandlerContext ctx, DefaultHttpResponse msg) throws Exception {
            System.out.println(msg.toString());
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        final ClientHandler handler = new ClientHandler();

        Bootstrap b = new Bootstrap();
        b.group(workerGroup);
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.remoteAddress("127.0.0.1", 8557);
        b.handler(new ChannelInitializer<SocketChannel>() {
            protected void initChannel(SocketChannel ch) {
                ChannelPipeline p = ch.pipeline();
                p.addLast("encoder", new RtspEncoder());
                p.addLast("decoder", new RtspDecoder());
                p.addLast(handler);
            }
        });

        Channel channel = b.connect().sync().channel();

        DefaultHttpRequest request = new DefaultHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.PLAY, "rtsp:123");
        request.headers().add(RtspHeaderNames.CSEQ, 1);
        request.headers().add(RtspHeaderNames.SESSION, "294");
        channel.writeAndFlush(request);

        Thread.sleep(10000);
        System.out.println(channel.isWritable());
        System.out.println(channel.isActive());

        request = new DefaultHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.TEARDOWN, "rtsp3");
        request.headers().add(RtspHeaderNames.CSEQ, 2);
        request.headers().add(RtspHeaderNames.SESSION, "294");
        channel.writeAndFlush(request);

        Scanner sc = new Scanner(System.in);
        sc.nextLine();
        channel.closeFuture().sync();
    }
}
This code only sends the first message; the server never receives the second one. How do I send another message?
You want to use DefaultFullHttpRequest or you need to "terminate" each DefaultHttpRequest with a LastHttpContent.
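A sketch of both options applied to the writes above (header values copied from the question; only the write calls change):

// Option 1: use a full request, which the encoder treats as a complete message.
DefaultFullHttpRequest play =
        new DefaultFullHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.PLAY, "rtsp:123");
play.headers().add(RtspHeaderNames.CSEQ, 1);
play.headers().add(RtspHeaderNames.SESSION, "294");
channel.writeAndFlush(play);

// Option 2: keep DefaultHttpRequest, but explicitly end each message so the
// encoder knows where one request stops and the next one begins.
DefaultHttpRequest teardown =
        new DefaultHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.TEARDOWN, "rtsp3");
teardown.headers().add(RtspHeaderNames.CSEQ, 2);
teardown.headers().add(RtspHeaderNames.SESSION, "294");
channel.write(teardown);
channel.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT);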

Echo Server concurrently with 1000 clients (lost messages + connection errors)

I am reading "Netty In Action V5". While working through chapters 2.3 and 2.4, I tried the EchoServer and EchoClient examples. With one client connected to the server, everything worked perfectly. I then modified the example so that multiple clients could connect to the server. My purpose was to run a stress test: 1000 clients would connect to the server, each client would echo 100 messages, and when all clients finished I would record the total time. The server was deployed on a Linux machine (VPS), and the clients were deployed on a Windows machine.
When running the stress test, I got 2 problems:
Some clients got an error message:
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:447)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
And some clients did not receive messages from the server.
Working environment:
Netty-all-4.0.30.Final
JDK 1.8.0_25
Echo clients were deployed on Windows 7 Ultimate
Echo server was deployed on Linux CentOS 6
Class NettyClient:
public class NettyClient {
    private Bootstrap bootstrap;
    private EventLoopGroup group;

    public NettyClient(final ChannelInboundHandlerAdapter handler) {
        group = new NioEventLoopGroup();
        bootstrap = new Bootstrap();
        bootstrap.group(group);
        bootstrap.channel(NioSocketChannel.class);
        bootstrap.handler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel channel) throws Exception {
                channel.pipeline().addLast(handler);
            }
        });
    }

    public void start(String host, int port) throws Exception {
        bootstrap.remoteAddress(new InetSocketAddress(host, port));
        bootstrap.connect();
    }

    public void stop() {
        try {
            group.shutdownGracefully().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Class NettyServer:
public class NettyServer {
    private EventLoopGroup parentGroup;
    private EventLoopGroup childGroup;
    private ServerBootstrap bootstrap;

    public NettyServer(final ChannelInboundHandlerAdapter handler) {
        parentGroup = new NioEventLoopGroup(300);
        childGroup = new NioEventLoopGroup(300);
        bootstrap = new ServerBootstrap();
        bootstrap.group(parentGroup, childGroup);
        bootstrap.channel(NioServerSocketChannel.class);
        bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel channel) throws Exception {
                channel.pipeline().addLast(handler);
            }
        });
    }

    public void start(int port) throws Exception {
        bootstrap.localAddress(new InetSocketAddress(port));
        ChannelFuture future = bootstrap.bind().sync();
        System.err.println("Start Netty server on port " + port);
        future.channel().closeFuture().sync();
    }

    public void stop() throws Exception {
        parentGroup.shutdownGracefully().sync();
        childGroup.shutdownGracefully().sync();
    }
}
Class EchoClient
public class EchoClient {
    private static final String HOST = "203.12.37.22";
    private static final int PORT = 3344;
    private static final int NUMBER_CONNECTION = 1000;
    private static final int NUMBER_ECHO = 10;
    private static CountDownLatch counter = new CountDownLatch(NUMBER_CONNECTION);

    public static void main(String[] args) throws Exception {
        List<NettyClient> listClients = Collections.synchronizedList(new ArrayList<NettyClient>());
        for (int i = 0; i < NUMBER_CONNECTION; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        NettyClient client = new NettyClient(new EchoClientHandler(NUMBER_ECHO) {
                            @Override
                            protected void onFinishEcho() {
                                counter.countDown();
                                System.err.println((NUMBER_CONNECTION - counter.getCount()) + "/" + NUMBER_CONNECTION);
                            }
                        });
                        client.start(HOST, PORT);
                        listClients.add(client);
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
            }).start();
        }

        long t1 = System.currentTimeMillis();
        counter.await();
        long t2 = System.currentTimeMillis();
        System.err.println("Total time: " + (t2 - t1));

        for (NettyClient client : listClients) {
            client.stop();
        }
    }

    private static class EchoClientHandler extends SimpleChannelInboundHandler<ByteBuf> {
        private static final String ECHO_MSG = "Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo";
        private int numberEcho;
        private int curNumberEcho = 0;

        public EchoClientHandler(int numberEcho) {
            this.numberEcho = numberEcho;
        }

        @Override
        public void channelActive(ChannelHandlerContext ctx) throws Exception {
            ctx.writeAndFlush(Unpooled.copiedBuffer(ECHO_MSG, CharsetUtil.UTF_8));
        }

        @Override
        protected void channelRead0(ChannelHandlerContext ctx, ByteBuf in) throws Exception {
            curNumberEcho++;
            if (curNumberEcho >= numberEcho) {
                onFinishEcho();
            } else {
                ctx.writeAndFlush(Unpooled.copiedBuffer(ECHO_MSG, CharsetUtil.UTF_8));
            }
        }

        protected void onFinishEcho() {
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
            cause.printStackTrace();
            ctx.close();
        }
    }
}
Class EchoServer:
public class EchoServer {
    private static final int PORT = 3344;

    public static void main(String[] args) throws Exception {
        NettyServer server = new NettyServer(new EchoServerHandler());
        server.start(PORT);
        System.err.println("Start server on port " + PORT);
    }

    @Sharable
    private static class EchoServerHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
            ctx.write(msg);
        }

        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
            ctx.flush();
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
            ctx.close();
        }
    }
}
You might change two things:
Create only one client Bootstrap and reuse it for all your clients instead of creating one per client. So extract the bootstrap construction out of the client part and keep only the connect, as you've done in your start method. This limits the number of threads used internally.
Close the connection on the client side when the number of ping-pongs is reached. Currently you only call the empty method onFinishEcho, which does not close anything on the client side, so no client ever stops... and therefore no channel is ever closed either (see the sketch just below).
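For the second point, a minimal sketch of one way to do it, closing the channel in the existing channelRead0 once the echo count is reached (illustrative; you could equally pass the ChannelHandlerContext into onFinishEcho):

@Override
protected void channelRead0(ChannelHandlerContext ctx, ByteBuf in) throws Exception {
    curNumberEcho++;
    if (curNumberEcho >= numberEcho) {
        onFinishEcho();
        ctx.close(); // actually close this client's connection once the echo count is reached
    } else {
        ctx.writeAndFlush(Unpooled.copiedBuffer(ECHO_MSG, CharsetUtil.UTF_8));
    }
}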
You might also have reached some limit on the number of threads on the client side.
One other element could be an issue: you don't specify any codec (string codec or whatever), which can lead to a partial send from the client or the server being treated as a full response.
For instance, a first block of "Echo Echo Echo" might be sent in one packet containing only the beginning of your buffer, while the other parts (more "Echo") are sent in later packets.
To prevent this, you should use a codec to ensure your final handler gets a real, full message rather than a partial one. Otherwise you may run into other issues, such as errors on the server side when it tries to send an extra packet after the channel has already been closed by the client earlier than expected... A sketch of one possible framing setup follows below.
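One possible framing setup, sketched under the assumption that a 4-byte length prefix is acceptable for your protocol (LengthFieldPrepender and LengthFieldBasedFrameDecoder are standard Netty handlers; the parameters below are illustrative). The same two handlers would need to be added in the server's childHandler as well:

bootstrap.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel channel) throws Exception {
        // Outbound: prepend a 4-byte length field to every buffer that is written.
        channel.pipeline().addLast(new LengthFieldPrepender(4));
        // Inbound: buffer until a whole frame has arrived, then strip the length field,
        // so the echo handler always sees one complete message per channelRead0 call.
        channel.pipeline().addLast(new LengthFieldBasedFrameDecoder(65535, 0, 4, 0, 4));
        channel.pipeline().addLast(handler);
    }
});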

Java, Netty, TCP and UDP connection integration : No buffer space available for UDP connection

I have an application which uses both the TCP and UDP protocols. The main assumption is that the client connects to the server via TCP, and once the connection is established, UDP datagrams are sent.
I have to support two scenarios of connecting to the server:
- the client connects when the server is running
- the client connects when the server is down and retries the connection until the server starts again
For the first scenario everything works fine: I got both connections working.
The problem is with the second scenario. When the client tries a few times to connect via TCP and finally connects, the UDP connection function throws an exception:
java.net.SocketException: No buffer space available (maximum connections reached?): bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:684)
at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
at io.netty.channel.socket.nio.NioDatagramChannel.doBind(NioDatagramChannel.java:192)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:484)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1080)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:197)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:722)
When I restart the client application without doing anything on the server, the client connects without any problems.
What can cause this problem?
Below I attach the source code of the classes. All of it comes from the examples on the official Netty project page. The only thing I have modified is that I replaced static variables and functions with non-static ones, because in the future I will need many TCP-UDP connections to multiple servers.
public final class UptimeClient {
    static final String HOST = System.getProperty("host", "192.168.2.193");
    static final int PORT = Integer.parseInt(System.getProperty("port", "2011"));
    static final int RECONNECT_DELAY = Integer.parseInt(System.getProperty("reconnectDelay", "5"));
    static final int READ_TIMEOUT = Integer.parseInt(System.getProperty("readTimeout", "10"));

    private static UptimeClientHandler handler;

    public void runClient() throws Exception {
        configureBootstrap(new Bootstrap()).connect();
    }

    private Bootstrap configureBootstrap(Bootstrap b) {
        return configureBootstrap(b, new NioEventLoopGroup());
    }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone(); //To change body of generated methods, choose Tools | Templates.
    }

    Bootstrap configureBootstrap(Bootstrap b, EventLoopGroup g) {
        if (handler == null) {
            handler = new UptimeClientHandler(this);
        }
        b.group(g)
         .channel(NioSocketChannel.class)
         .remoteAddress(HOST, PORT)
         .handler(new ChannelInitializer<SocketChannel>() {
             @Override
             public void initChannel(SocketChannel ch) throws Exception {
                 ch.pipeline().addLast(new IdleStateHandler(READ_TIMEOUT, 0, 0), handler);
             }
         });
        return b;
    }

    void connect(Bootstrap b) {
        b.connect().addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.cause() != null) {
                    handler.startTime = -1;
                    handler.println("Failed to connect: " + future.cause());
                }
            }
        });
    }
}
@Sharable
public class UptimeClientHandler extends SimpleChannelInboundHandler<Object> {
    UptimeClient client;
    long startTime = -1;

    public UptimeClientHandler(UptimeClient client) {
        this.client = client;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        try {
            if (startTime < 0) {
                startTime = System.currentTimeMillis();
            }
            println("Connected to: " + ctx.channel().remoteAddress());
            new QuoteOfTheMomentClient(null).run();
        } catch (Exception ex) {
            Logger.getLogger(UptimeClientHandler.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        if (!(evt instanceof IdleStateEvent)) {
            return;
        }
        IdleStateEvent e = (IdleStateEvent) evt;
        if (e.state() == IdleState.READER_IDLE) {
            // The connection was OK but there was no traffic for the last period.
            println("Disconnecting due to no inbound traffic");
            ctx.close();
        }
    }

    @Override
    public void channelInactive(final ChannelHandlerContext ctx) {
        println("Disconnected from: " + ctx.channel().remoteAddress());
    }

    @Override
    public void channelUnregistered(final ChannelHandlerContext ctx) throws Exception {
        println("Sleeping for: " + UptimeClient.RECONNECT_DELAY + 's');
        final EventLoop loop = ctx.channel().eventLoop();
        loop.schedule(new Runnable() {
            @Override
            public void run() {
                println("Reconnecting to: " + UptimeClient.HOST + ':' + UptimeClient.PORT);
                client.connect(client.configureBootstrap(new Bootstrap(), loop));
            }
        }, UptimeClient.RECONNECT_DELAY, TimeUnit.SECONDS);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }

    void println(String msg) {
        if (startTime < 0) {
            System.err.format("[SERVER IS DOWN] %s%n", msg);
        } else {
            System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg);
        }
    }
}
public final class QuoteOfTheMomentClient {
    private ServerData config;

    public QuoteOfTheMomentClient(ServerData config) {
        this.config = config;
    }

    public void run() throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioDatagramChannel.class)
             .option(ChannelOption.SO_BROADCAST, true)
             .handler(new QuoteOfTheMomentClientHandler());

            Channel ch = b.bind(0).sync().channel();
            ch.writeAndFlush(new DatagramPacket(
                    Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8),
                    new InetSocketAddress("192.168.2.193", 8193))).sync();

            if (!ch.closeFuture().await(5000)) {
                System.err.println("QOTM request timed out.");
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            group.shutdownGracefully();
        }
    }
}
public class QuoteOfTheMomentClientHandler extends SimpleChannelInboundHandler<DatagramPacket> {
    @Override
    public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
        String response = msg.content().toString(CharsetUtil.UTF_8);
        if (response.startsWith("QOTM: ")) {
            System.out.println("Quote of the Moment: " + response.substring(6));
            ctx.close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
If your server is Windows Server 2008 (R2 or R2 SP1), this problem is likely described and solved by this stackoverflow answer which refers to Microsoft KB article #2577795
This issue occurs because of a race condition in the Ancillary Function Driver
for WinSock (Afd.sys) that causes sockets to be leaked. With time, the issue
that is described in the "Symptoms" section occurs if all available socket
resources are exhausted.
If your server is Windows Server 2003, this problem is likely described and solved by this stackoverflow answer which refers to Microsoft KB article #196271
The default maximum number of ephemeral TCP ports is 5000 in the products that
are included in the "Applies to" section. A new parameter has been added in
these products. To increase the maximum number of ephemeral ports, follow these
steps...
...which basically means that you have run out of ephemeral ports.
