I have been assigned to build a TCP server in my organization that receives text messages and splits them. Unfortunately, some of the message characters become garbage (I am using JMeter as my TCP client). I have two questions related to this problem; any help is highly appreciated.
Why can I not split my message using the "»" (\u00BB) character? It has never worked. How can I use "»" as the delimiter in DelimiterBasedFrameDecoder?
Why do I receive garbage characters even though I use UTF-8 for encoding/decoding? (I only manage to receive messages when I comment out pipeline.addLast("frameDecoder", new io.netty.handler.codec.DelimiterBasedFrameDecoder(500000, byteDeli)).)
Sample request:
pov1‹1‹202030‹81056581‹0‹6‹565810000011‹0‹130418135639‹3‹4‹0‹cha7373737›chaE15E2512380›1›1«ban7373737›banE15E2512380›2›2«ind7373737›indE15E2512380›3›3»
Eclipse console: Received Request ::::::
pov1�1�202030�81056581�0�6�565810000011�0�130418135639�3�4�0�cha7373737�chaE15E2512380�1�1�ban7373737�banE15E2512380�2�2�ind7373737�indE15E2512380�3�3�
Server class:
public void run() {
try {
System.out.println("2:run");
bootstrap
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ChannelPipeline pipeline = ch.pipeline();
DTMTCPServiceHandler serviceHandler = context
.getBean(DTMTCPServiceHandler.class);
pipeline.addFirst(new LoggingHandler(
LogLevel.INFO));
byte[] delimiter = "\u00BB".getBytes(CharsetUtil.UTF_8);//»
ByteBuf byteDeli = Unpooled.copiedBuffer(delimiter);
pipeline.addLast(
"frameDecoder",
new io.netty.handler.codec.DelimiterBasedFrameDecoder(
500000, byteDeli)); // Decoders
pipeline.addLast("stringDecoder",
new StringDecoder(CharsetUtil.UTF_8));
pipeline.addLast("stringEncoder",
new StringEncoder(CharsetUtil.UTF_8));
pipeline.addLast("messageHandler",
serviceHandler);
}
}).option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
serverChannel = bootstrap.bind(7070).sync().channel()
.closeFuture().sync().channel();
} catch (InterruptedException e) {
//error
logger.error("POSGatewayServiceThread : InterruptedException",
e);
System.out.println(e);
} finally {
//finally
System.out.println("finally");
serverChannel.close();
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
Handler class:
public class DTMTCPServiceHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
String posMessage = msg.toString();
System.out.println("Recieved Request :::::: " + posMessage);
String response = "-";
ByteBuf copy = null;
try {
//Called to separate splitter class
response = dtmtcpServiceManager.manageDTMTCPMessage(posMessage);
copy = Unpooled.copiedBuffer(response.getBytes());
} finally {
logger.info("Recieved Response :::::: " + response);
ctx.write(copy);
ctx.flush();
}
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
//Open
super.channelActive(ctx);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
//End
super.channelReadComplete(ctx);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
//exception
ctx.close();
}
}
I found the problem, and it is not related to Netty. The error was with the JMeter encoding. I managed to solve it after modifying the "jmeter.properties" file in \apache-jmeter-x.xx\bin:
tcp.charset=UTF-8
Sorry to trouble you guys; the fault was mine.
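For what it's worth, a small standalone check (illustration only, not part of the server) shows why the charset mismatch broke the delimiter matching: in UTF-8 the "»" (\u00BB) delimiter is the two-byte sequence 0xC2 0xBB, while a client sending a single-byte charset such as ISO-8859-1 emits the single byte 0xBB, which DelimiterBasedFrameDecoder never matches; and with the frame decoder removed, the UTF-8 StringDecoder turns those non-UTF-8 bytes into the "�" replacement characters seen in the console.

import io.netty.util.CharsetUtil;
import java.util.Arrays;

// Illustration only: prints the byte sequences the "\u00BB" delimiter produces
// under two charsets, showing why a client/server charset mismatch prevents
// DelimiterBasedFrameDecoder from ever finding the delimiter bytes.
public class DelimiterBytesCheck {
    public static void main(String[] args) {
        byte[] utf8 = "\u00BB".getBytes(CharsetUtil.UTF_8);        // [-62, -69] -> 0xC2 0xBB
        byte[] latin1 = "\u00BB".getBytes(CharsetUtil.ISO_8859_1); // [-69]      -> 0xBB
        System.out.println("UTF-8:      " + Arrays.toString(utf8));
        System.out.println("ISO-8859-1: " + Arrays.toString(latin1));
    }
}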
I'm using a Netty server to solve this: reading a big file line by line and processing it. Doing it on a single machine is still too slow, so I've decided to use the server to serve chunks of data to clients. That already works, but what I also want is for the server to shut itself down when the whole file has been processed. The source code I'm using right now is:
public static void main(String[] args) {
new Thread(() -> {
//reading the big file and populating 'dq' - data queue
}).start();
final EventLoopGroup bGrp = new NioEventLoopGroup(1);
final EventLoopGroup wGrp = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bGrp, wGrp)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 100)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) {
ChannelPipeline p = ch.pipeline();
p.addLast(new StringDecoder());
p.addLast(new StringEncoder());
p.addLast(new ServerHandler(dq, bGrp, wGrp));
}
});
ChannelFuture f = b.bind(PORT).sync();
f.channel().closeFuture().sync();
} finally {
wGrp.shutdownGracefully();
bGrp.shutdownGracefully();
}
}
class ServerHandler extends ChannelInboundHandlerAdapter {
public ServerHandler(dq, bGrp, wGrp) {
//assigning params to instance fields
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
//creating bulk of data from 'dq' and sending to client
/* e.g.
ctx.write(dq.get());
*/
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
if (dq.isEmpty() /*or other check that file was processed*/ ) {
try {
ctx.channel().closeFuture().sync();
} catch (InterruptedException ie) {
//...
}
wGrp.shutdownGracefully();
bGrp.shutdownGracefully();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
ctx.close();
ctx.executor().parent().shutdownGracefully();
}
}
Is the server shutdown in the channelReadComplete(...) method correct? What I'm afraid of is that another client may still be being served (e.g. another client is in the middle of receiving a big chunk while the current client has reached the end of 'dq').
The base code is from the Netty EchoServer/DiscardServer examples.
The question is: how do I shut down the Netty server (from a handler) when a specific condition is reached?
Thanks
You cannot shut down a server from within one of its handlers. What you can do is signal a different thread that it should shut down the server, for example as sketched below.
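A minimal sketch of that approach, assuming the thread that runs the bootstrap waits on a shared CountDownLatch instead of blocking on closeFuture() (the class and field names below are placeholders, not the original ServerHandler code):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.Queue;
import java.util.concurrent.CountDownLatch;

// The handler only signals; the bootstrap thread performs the actual shutdown.
class ShutdownSignallingHandler extends ChannelInboundHandlerAdapter {
    private final Queue<String> dq;
    private final CountDownLatch shutdownSignal;

    ShutdownSignallingHandler(Queue<String> dq, CountDownLatch shutdownSignal) {
        this.dq = dq;
        this.shutdownSignal = shutdownSignal;
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        if (dq.isEmpty()) {              // "whole file processed" condition
            ctx.close();                 // close only this client's channel
            shutdownSignal.countDown();  // wake the bootstrap thread
        }
    }
}

In main(), bind as before, then replace f.channel().closeFuture().sync() with shutdownSignal.await(); once the latch is released, close the server channel and call shutdownGracefully() on both event loop groups from that thread.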
One day I decided to create a Netty chat server using the TCP protocol. Currently, it successfully logs connects and disconnects, but channelRead0 in my handler never fires. I tried a Python client.
Netty version: 4.1.6.Final
Handler code:
public class ServerWrapperHandler extends SimpleChannelInboundHandler<String> {
private final TcpServer server;
public ServerWrapperHandler(TcpServer server){
this.server = server;
}
@Override
public void handlerAdded(ChannelHandlerContext ctx) {
System.out.println("Client connected.");
server.addClient(ctx);
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) {
System.out.println("Client disconnected.");
server.removeClient(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, String msg) {
System.out.println("Message received.");
server.handleMessage(ctx, msg);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
System.out.println("Read complete.");
super.channelReadComplete(ctx);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
Output:
[TCPServ] Starting on 0.0.0.0:1052
Client connected.
Read complete.
Read complete.
Client disconnected.
Client code:
import socket

conn = socket.socket()
conn.connect(("127.0.0.1", 1052))
conn.send(b"Hello")
data = b""
tmp = conn.recv(1024)
while tmp:
    data += tmp
    tmp = conn.recv(1024)
print(data.decode("utf-8"))
conn.close()
By the way, the problem was in my initializer: I had added a DelimiterBasedFrameDecoder to my pipeline, and this decoder stopped messages from ever reaching the handler, most likely because the client never sends the line delimiter, so the decoder keeps buffering without emitting a complete frame. I didn't need this decoder, so I simply deleted it, and everything started to work.
@Override
protected void initChannel(SocketChannel ch) throws Exception {
// Create a default pipeline implementation.
ChannelPipeline pipeline = ch.pipeline();
// Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java object.
// Protocol Encoder - translates a Java object into binary data.
// Add the text line codec combination first,
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter())); //<--- DELETE THIS
pipeline.addLast("decoder", new StringDecoder());
pipeline.addLast("encoder", new StringEncoder());
pipeline.addLast("handler", new ServerWrapperHandler(tcpServer));
}
I am trying to implement a TCP server in Java using Netty. I can handle messages shorter than 1024 bytes correctly, but when I receive a message larger than 1024 bytes, I only see a partial message.
I did some research and found that I should implement a ReplayingDecoder, but I am unable to understand how to implement the decode method.
My messages use JSON.
Netty version: 4.1.27
protected void decode(ChannelHandlerContext channelHandlerContext, ByteBuf byteBuf, List<Object> list) throws Exception
My server setup:
EventLoopGroup group;
group = new NioEventLoopGroup(this.numThreads);
try {
ServerBootstrap serverBootstrap;
RequestHandler requestHandler;
ChannelFuture channelFuture;
serverBootstrap = new ServerBootstrap();
serverBootstrap.group(group);
serverBootstrap.channel(NioServerSocketChannel.class);
serverBootstrap.localAddress(new InetSocketAddress("::", this.port));
requestHandler = new RequestHandler(this.responseManager, this.logger);
serverBootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline().addLast(requestHandler);
}
});
channelFuture = serverBootstrap.bind().sync();
channelFuture.channel().closeFuture().sync();
}
catch(Exception e){
this.logger.info(String.format("Unknown failure %s", e.getMessage()));
}
finally {
try {
group.shutdownGracefully().sync();
}
catch (InterruptedException e) {
this.logger.info(String.format("Error shutting down %s", e.getMessage()));
}
}
My current request handler:
package me.chirag7jain.Response;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
import org.apache.logging.log4j.Logger;
import java.net.InetSocketAddress;
public class RequestHandler extends ChannelInboundHandlerAdapter {
private ResponseManager responseManager;
private Logger logger;
public RequestHandler(ResponseManager responseManager, Logger logger) {
this.responseManager = responseManager;
this.logger = logger;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf byteBuf;
String data, hostAddress;
byteBuf = (ByteBuf) msg;
data = byteBuf.toString(CharsetUtil.UTF_8);
hostAddress = ((InetSocketAddress) ctx.channel().remoteAddress()).getAddress().getHostAddress();
if (!data.isEmpty()) {
String reply;
this.logger.info(String.format("Data received %s from %s", data, hostAddress));
reply = this.responseManager.reply(data);
if (reply != null) {
ctx.write(Unpooled.copiedBuffer(reply, CharsetUtil.UTF_8));
}
}
else {
logger.info(String.format("NO Data received from %s", hostAddress));
}
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
this.logger.info(String.format("Received Exception %s", cause.getMessage()));
ctx.close();
}
I would accept data in channelRead() and accumulate it in a buffer. Before returning from channelRead(), I would invoke read() on the Channel. You may need to record other data as well, depending on your needs.
When Netty invokes channelReadComplete(), that is the moment to send the whole buffer to your ResponseManager.
Channel.read(): "Request to read data from the Channel into the first inbound buffer, triggers a ChannelInboundHandler.channelRead(ChannelHandlerContext, Object) event if data was read, and triggers a channelReadComplete event so the handler can decide to continue reading."
Your Channel object is accessible by ctx.channel().
Try this code:
private final AttributeKey<StringBuffer> dataKey = AttributeKey.valueOf("dataBuf");
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf byteBuf;
String data, hostAddress;
StringBuffer dataBuf = ctx.attr(dataKey).get();
boolean allocBuf = dataBuf == null;
if (allocBuf) dataBuf = new StringBuffer();
byteBuf = (ByteBuf) msg;
data = byteBuf.toString(CharsetUtil.UTF_8);
hostAddress = ((InetSocketAddress) ctx.channel().remoteAddress()).getAddress().getHostAddress();
if (!data.isEmpty()) {
this.logger.info(String.format("Data received %s from %s", data, hostAddress));
}
else {
logger.info(String.format("NO Data received from %s", hostAddress));
}
dataBuf.append(data);
if (allocBuf) ctx.attr(dataKey).set(dataBuf);
ctx.channel().read();
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
StringBuffer dataBuf = ctx.attr(dataKey).get();
if (dataBuf != null) {
String reply;
reply = this.responseManager.reply(dataBuf.toString());
if (reply != null) {
ctx.write(Unpooled.copiedBuffer(reply, CharsetUtil.UTF_8));
}
}
ctx.attr(dataKey).set(null);
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
An application protocol with variable-length messages must use one of the following:
- a length word,
- a terminator character or sequence (which in turn implies an escape character in case the data contains the terminator), or
- a self-describing protocol such as XML.
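Since the messages in this question are JSON, one sketch of the self-describing approach, assuming Netty's built-in io.netty.handler.codec.json.JsonObjectDecoder is acceptable, is to let the pipeline frame complete JSON objects before they reach the handler:

// Sketch only: JsonObjectDecoder emits one ByteBuf per complete top-level JSON
// object, so the existing RequestHandler still receives a ByteBuf, but it is now
// always a whole message. The 1 MiB limit is an assumption, not a value from the question.
serverBootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel socketChannel) {
        socketChannel.pipeline()
                .addLast(new JsonObjectDecoder(1048576))
                .addLast(requestHandler);
    }
});

Alternatively, a length word prepended by the client pairs with a LengthFieldBasedFrameDecoder on the server.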
I am making a curl POST (curl -X POST -d "dsds" 10.0.0.211:5201) to my Netty socket server, but in my channelRead, when I try to cast the Object msg into a FullHttpRequest, it throws the following exception:
java.lang.ClassCastException: io.netty.buffer.SimpleLeakAwareByteBuf cannot be cast to io.netty.handler.codec.http.FullHttpRequest
at edu.clemson.openflow.sos.host.netty.HostPacketHandler.channelRead(HostPacketHandler.java:42)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
at java.lang.Thread.run(Thread.java:748)
Following is my socket handler class:
@ChannelHandler.Sharable
public class HostPacketHandler extends ChannelInboundHandlerAdapter {
private static final Logger log = LoggerFactory.getLogger(HostPacketHandler.class);
private RequestParser request;
public HostPacketHandler(RequestParser request) {
this.request = request;
log.info("Expecting Host at IP {} Port {}",
request.getClientIP(), request.getClientPort());
}
public void setRequestObject(RequestParser requestObject) {
this.request = requestObject;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
// Discard the received data silently.
InetSocketAddress socketAddress = (InetSocketAddress) ctx.channel().remoteAddress();
log.info("Got Message from {} at Port {}",
socketAddress.getHostName(),
socketAddress.getPort());
//FullHttpRequest request = (FullHttpRequest) msg;
log.info(msg.getClass().getSimpleName());
//((ByteBuf) msg).release();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
// Close the connection when an exception is raised.
cause.printStackTrace();
ctx.close();
}
}
Pipeline:
public class NettyHostSocketServer implements IClientSocketServer {
protected static boolean isClientHandlerRunning = false;
private static final Logger log = LoggerFactory.getLogger(SocketManager.class);
private static final int CLIENT_DATA_PORT = 9877;
private static final int MAX_CLIENTS = 5;
private HostPacketHandler hostPacketHandler;
public NettyHostSocketServer(RequestParser request) {
hostPacketHandler = new HostPacketHandler(request);
}
private boolean startSocket(int port) {
NioEventLoopGroup group = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(group)
.channel(NioServerSocketChannel.class)
.localAddress(new InetSocketAddress(port))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(
hostPacketHandler);
}
});
ChannelFuture f = b.bind().sync();
log.info("Started host-side socket server at Port {}",CLIENT_DATA_PORT);
return true;
// Need to do socket closing handling. close all the remaining open sockets
//System.out.println(EchoServer.class.getName() + " started and listen on " + f.channel().localAddress());
//f.channel().closeFuture().sync();
} catch (InterruptedException e) {
log.error("Error starting host-side socket");
e.printStackTrace();
return false;
} finally {
//group.shutdownGracefully().sync();
}
}
@Override
public boolean start() {
if (!isClientHandlerRunning) {
isClientHandlerRunning = true;
return startSocket(CLIENT_DATA_PORT);
}
return true;
}
@Override
public int getActiveConnections() {
return 0;
}
}
I also used Wireshark to check whether I am getting valid packets or not (the screenshot of the Wireshark dump is not included here).
Your problem is that you never decode the ByteBuf into an actual HttpRequest object, which is why you get this error: you can't cast a ByteBuf to a FullHttpRequest.
You should do something like this:
@Override
public void initChannel(Channel channel) throws Exception {
channel.pipeline().addLast(new HttpRequestDecoder()) // Decodes the ByteBuf into a HttpMessage and HttpContent (1)
.addLast(new HttpObjectAggregator(1048576)) // Aggregates the HttpMessage with its following HttpContent into a FullHttpRequest
.addLast(hostPacketHandler);
}
(1) If you also want to send an HttpResponse, use the HttpServerCodec handler instead; it combines the HttpRequestDecoder and HttpResponseEncoder.
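A minimal variant of the initializer using HttpServerCodec, assuming responses are written back through the same pipeline, might look like this:

@Override
public void initChannel(Channel channel) throws Exception {
    channel.pipeline()
           .addLast(new HttpServerCodec())              // HttpRequestDecoder + HttpResponseEncoder in one handler
           .addLast(new HttpObjectAggregator(1048576))  // aggregates HttpMessage + HttpContent into FullHttpRequest
           .addLast(hostPacketHandler);                 // can now safely cast msg to FullHttpRequest
}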
I have an application which uses both the TCP and UDP protocols. The main assumption is that the client connects to the server via TCP and, once the connection is established, UDP datagrams are sent.
I have to support two scenarios of connecting to the server:
- the client connects when the server is running
- the client connects when the server is down and retries the connection until the server starts again
For the first scenario everything works fine: both connections are established.
The problem is with the second scenario. When the client tries a few times to connect via TCP and finally connects, the UDP connection function throws an exception:
java.net.SocketException: No buffer space available (maximum connections reached?): bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:684)
at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
at io.netty.channel.socket.nio.NioDatagramChannel.doBind(NioDatagramChannel.java:192)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:484)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1080)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:197)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:722)
When I restart the client application without doing anything to the server, the client connects without any problems.
What can cause the problem?
Below I attach the source code of the classes. All of it comes from the examples on the official Netty project page. The only thing I have modified is that I replaced static variables and functions with non-static ones, because in the future I will need many TCP-UDP connections to multiple servers.
public final class UptimeClient {
static final String HOST = System.getProperty("host", "192.168.2.193");
static final int PORT = Integer.parseInt(System.getProperty("port", "2011"));
static final int RECONNECT_DELAY = Integer.parseInt(System.getProperty("reconnectDelay", "5"));
static final int READ_TIMEOUT = Integer.parseInt(System.getProperty("readTimeout", "10"));
private static UptimeClientHandler handler;
public void runClient() throws Exception {
configureBootstrap(new Bootstrap()).connect();
}
private Bootstrap configureBootstrap(Bootstrap b) {
return configureBootstrap(b, new NioEventLoopGroup());
}
@Override
protected Object clone() throws CloneNotSupportedException {
return super.clone(); //To change body of generated methods, choose Tools | Templates.
}
Bootstrap configureBootstrap(Bootstrap b, EventLoopGroup g) {
if(handler == null){
handler = new UptimeClientHandler(this);
}
b.group(g)
.channel(NioSocketChannel.class)
.remoteAddress(HOST, PORT)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new IdleStateHandler(READ_TIMEOUT, 0, 0), handler);
}
});
return b;
}
void connect(Bootstrap b) {
b.connect().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.cause() != null) {
handler.startTime = -1;
handler.println("Failed to connect: " + future.cause());
}
}
});
}
}
@Sharable
public class UptimeClientHandler extends SimpleChannelInboundHandler<Object> {
UptimeClient client;
public UptimeClientHandler(UptimeClient client){
this.client = client;
}
long startTime = -1;
@Override
public void channelActive(ChannelHandlerContext ctx) {
try {
if (startTime < 0) {
startTime = System.currentTimeMillis();
}
println("Connected to: " + ctx.channel().remoteAddress());
new QuoteOfTheMomentClient(null).run();
} catch (Exception ex) {
Logger.getLogger(UptimeClientHandler.class.getName()).log(Level.SEVERE, null, ex);
}
}
@Override
public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
}
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
if (!(evt instanceof IdleStateEvent)) {
return;
}
IdleStateEvent e = (IdleStateEvent) evt;
if (e.state() == IdleState.READER_IDLE) {
// The connection was OK but there was no traffic for last period.
println("Disconnecting due to no inbound traffic");
ctx.close();
}
}
@Override
public void channelInactive(final ChannelHandlerContext ctx) {
println("Disconnected from: " + ctx.channel().remoteAddress());
}
@Override
public void channelUnregistered(final ChannelHandlerContext ctx) throws Exception {
println("Sleeping for: " + UptimeClient.RECONNECT_DELAY + 's');
final EventLoop loop = ctx.channel().eventLoop();
loop.schedule(new Runnable() {
@Override
public void run() {
println("Reconnecting to: " + UptimeClient.HOST + ':' + UptimeClient.PORT);
client.connect(client.configureBootstrap(new Bootstrap(), loop));
}
}, UptimeClient.RECONNECT_DELAY, TimeUnit.SECONDS);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
void println(String msg) {
if (startTime < 0) {
System.err.format("[SERVER IS DOWN] %s%n", msg);
} else {
System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg);
}
}
}
public final class QuoteOfTheMomentClient {
private ServerData config;
public QuoteOfTheMomentClient(ServerData config){
this.config = config;
}
public void run() throws Exception {
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioDatagramChannel.class)
.option(ChannelOption.SO_BROADCAST, true)
.handler(new QuoteOfTheMomentClientHandler());
Channel ch = b.bind(0).sync().channel();
ch.writeAndFlush(new DatagramPacket(
Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8),
new InetSocketAddress("192.168.2.193", 8193))).sync();
if (!ch.closeFuture().await(5000)) {
System.err.println("QOTM request timed out.");
}
}
catch(Exception ex)
{
ex.printStackTrace();
}
finally {
group.shutdownGracefully();
}
}
}
public class QuoteOfTheMomentClientHandler extends SimpleChannelInboundHandler<DatagramPacket> {
@Override
public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
String response = msg.content().toString(CharsetUtil.UTF_8);
if (response.startsWith("QOTM: ")) {
System.out.println("Quote of the Moment: " + response.substring(6));
ctx.close();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
If your server is Windows Server 2008 (R2 or R2 SP1), this problem is likely described and solved by this Stack Overflow answer, which refers to Microsoft KB article #2577795:
This issue occurs because of a race condition in the Ancillary Function Driver
for WinSock (Afd.sys) that causes sockets to be leaked. With time, the issue
that is described in the "Symptoms" section occurs if all available socket
resources are exhausted.
If your server is Windows Server 2003, this problem is likely described and solved by this Stack Overflow answer, which refers to Microsoft KB article #196271:
The default maximum number of ephemeral TCP ports is 5000 in the products that
are included in the "Applies to" section. A new parameter has been added in
these products. To increase the maximum number of ephemeral ports, follow these
steps...
...which basically means that you have run out of ephemeral ports.