Netty Connection Retries - java

Retry Connection in Netty
I am building a client socket system. The requirements are:
First, attempt to connect to the remote server.
When the first attempt fails, keep retrying until the server is online.
I would like to know whether Netty has such a feature, or how best I can solve this.
Thank you very much
This is the code snippet I am struggling with:
protected void connect() throws Exception {
    this.bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool()));
    // Configure the event pipeline factory.
    bootstrap.setPipelineFactory(new SmpPipelineFactory());
    bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
    bootstrap.setOption("sendBufferSize", 1048576);
    bootstrap.setOption("receiveBufferSize", 1048576);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);
    // Make a new connection.
    final ChannelFuture connectFuture = bootstrap
            .connect(new InetSocketAddress(config.getRemoteAddr(), config.getRemotePort()));
    channel = connectFuture.getChannel();
    connectFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (connectFuture.isSuccess()) {
                // Connection attempt succeeded:
                // Begin to accept incoming traffic.
                channel.setReadable(true);
            } else {
                // Close the connection if the connection attempt has failed.
                channel.close();
                logger.info("Unable to Connect to the Remote Socket server");
            }
        }
    });
}

Assuming Netty 3.x, the simplest example would be:
// Configure the client.
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

ChannelFuture future = null;
while (true) {
    future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
    future.awaitUninterruptibly();
    if (future.isSuccess()) {
        break;
    }
}
Obviously you'd want to have your own logic for the loop that sets a maximum number of tries, etc. Netty 4.x has a slightly different bootstrap, but the logic is the same. This is also synchronous, blocking, and ignores InterruptedException; in a real application you might register a ChannelFutureListener with the Future and be notified when the Future completes.
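For illustration, here is one way that loop might look with a bounded number of attempts and a pause between them (maxRetries and retryDelayMs are made-up values for this sketch, not something from the original answer):

int maxRetries = 10;          // illustrative cap on attempts
long retryDelayMs = 5000;     // illustrative delay between attempts
ChannelFuture future = null;
for (int attempt = 1; attempt <= maxRetries; attempt++) {
    future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
    future.awaitUninterruptibly();
    if (future.isSuccess()) {
        break;                                   // connected
    }
    future.getChannel().close();                 // release the failed channel
    try {
        Thread.sleep(retryDelayMs);              // crude back-off between attempts
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();      // give up if interrupted
        break;
    }
}
if (future == null || !future.isSuccess()) {
    throw new IllegalStateException("Could not connect after " + maxRetries + " attempts");
}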
Added after the OP edited the question:
You have a ChannelFutureListener that is getting notified. If you want to then retry the connection you're going to have to either have that listener hold a reference to the bootstrap, or communicate back to your main thread that the connection attempt failed and have it retry the operation. If you have the listener do it (which is the simplest way) be aware that you need to limit the number of retries to prevent an infinite recursion - it's being executed in the context of the Netty worker thread. If you exhaust your retries, again, you'll need to communicate that back to your main thread; you could do that via a volatile variable, or the observer pattern could be used.
When dealing with async you really have to think concurrently. There's a number of ways to skin that particular cat.
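To make the listener-driven variant concrete, here is a sketch (class and field names are illustrative, not from the original code):

import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicInteger;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

// Sketch: a listener that retries the connect itself. It holds a reference to
// the bootstrap and caps the number of attempts to avoid unbounded recursion
// on the Netty worker thread.
public class RetryingConnectListener implements ChannelFutureListener {
    private final ClientBootstrap bootstrap;
    private final InetSocketAddress remote;
    private final int maxRetries;
    private final AtomicInteger attempts = new AtomicInteger();

    public RetryingConnectListener(ClientBootstrap bootstrap, InetSocketAddress remote, int maxRetries) {
        this.bootstrap = bootstrap;
        this.remote = remote;
        this.maxRetries = maxRetries;
    }

    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            return; // connected, nothing more to do
        }
        future.getChannel().close();
        if (attempts.incrementAndGet() < maxRetries) {
            // Re-attempt and register this same listener again; note this runs
            // on a Netty worker thread, not the application's main thread.
            bootstrap.connect(remote).addListener(this);
        }
        // else: out of retries -- signal the main thread here, e.g. via a
        // volatile flag or an observer callback (left out of this sketch).
    }
}

Usage would then look something like bootstrap.connect(addr).addListener(new RetryingConnectListener(bootstrap, addr, 10)), with the retry limit chosen to suit the application.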

Thank you, Brian Roach. The connected variable is volatile and can be accessed from outside this code for further processing.
final InetSocketAddress sockAddr = new InetSocketAddress(
        config.getRemoteAddr(), config.getRemotePort());
final ChannelFuture connectFuture = bootstrap.connect(sockAddr);
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // Connection attempt succeeded:
            // Begin to accept incoming traffic.
            channel.setReadable(true);
            connected = true;
        } else {
            // Close the connection if the connection attempt has failed.
            channel.close();
            if (!connected) {
                logger.debug("Attempt to connect within " + ((double) frequency / (double) 1000) + " seconds");
                try {
                    Thread.sleep(frequency);
                } catch (InterruptedException e) {
                    logger.error(e.getMessage());
                }
                bootstrap.connect(sockAddr).addListener(this);
            }
        }
    }
});
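One caveat with the snippet above: Thread.sleep runs on a Netty I/O worker thread and stalls it for the whole delay. Here is a sketch of the same retry with the delay scheduled off the worker thread via Netty 3's HashedWheelTimer (the timer field is an assumption; everything else reuses the fields from the snippet above):

// Variant: schedule the reconnect instead of sleeping on the I/O worker thread.
// Assumes a shared Netty 3 timer field somewhere in the class, e.g.
//   private final Timer timer = new HashedWheelTimer();
// (org.jboss.netty.util.Timer / TimerTask / Timeout, java.util.concurrent.TimeUnit)
final ChannelFutureListener reconnectListener = new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // Connected: start reading on the channel that actually succeeded.
            future.getChannel().setReadable(true);
            connected = true;
            return;
        }
        future.getChannel().close();
        final ChannelFutureListener self = this;
        // Retry after 'frequency' milliseconds without blocking the worker thread.
        timer.newTimeout(new TimerTask() {
            @Override
            public void run(Timeout timeout) {
                bootstrap.connect(sockAddr).addListener(self);
            }
        }, frequency, TimeUnit.MILLISECONDS);
    }
};
connectFuture.addListener(reconnectListener);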

Related

How is Netty's FileDescriptor usage on OS X?

In the PLC4X project we are using Netty for clients to connect to PLCs, which act as servers. Sometimes, either by user error or by PLC error, the connections are not accepted but rejected. If we retry building up the connection ASAP multiple times, we run into the error message "Too many open files".
I try to clean up everything in my code, so I would assume that there are no file descriptors that could leak:
try {
    final NioEventLoopGroup workerGroup = new NioEventLoopGroup();
    Bootstrap bootstrap = new Bootstrap();
    bootstrap.group(workerGroup);
    bootstrap.channel(NioSocketChannel.class);
    bootstrap.option(ChannelOption.SO_KEEPALIVE, true);
    bootstrap.option(ChannelOption.TCP_NODELAY, true);
    // TODO we should use an explicit (configurable?) timeout here
    // bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000);
    bootstrap.handler(channelHandler);
    // Start the client.
    final ChannelFuture f = bootstrap.connect(address, port);
    f.addListener(new GenericFutureListener<Future<? super Void>>() {
        @Override
        public void operationComplete(Future<? super Void> future) throws Exception {
            if (!future.isSuccess()) {
                logger.info("Unable to connect, shutting down worker thread.");
                workerGroup.shutdownGracefully();
            }
        }
    });
    // Wait for sync
    f.sync();
    f.awaitUninterruptibly(); // jf: unsure if we need that
    // Wait till the session is finished initializing.
    return f.channel();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new PlcConnectionException("Error creating channel.", e);
} catch (Exception e) {
    throw new PlcConnectionException("Error creating channel.", e);
}
From my understanding, the Listener should always shutdown the group and free up all descriptors used.
But in reality, when running it on macOS Catalina, I see that about 1% of the failures are not due to "rejection" but due to "Too many open files".
Is this a ulimit thing, as Netty (on macOS) simply needs a number of fd's to use? Or am I leaking something?
Thanks for clarification!
I found the solution, kind of, myself.
There are two issues (probably even three) in the original implementation, which are not really related to Mac OS X:
connect and addListener should be chained
workerGroup.shutdownGracefully() is triggered in another thread, so the main (calling) thread has already finished
it is never awaited that the workerGroup has really finished
Together this can, it seems, lead to situations where new groups are spawned faster than old groups are closed.
Thus, I changed the implementation to
try {
    final NioEventLoopGroup workerGroup = new NioEventLoopGroup();
    Bootstrap bootstrap = new Bootstrap();
    bootstrap.group(workerGroup);
    bootstrap.channel(NioSocketChannel.class);
    bootstrap.option(ChannelOption.SO_KEEPALIVE, true);
    bootstrap.option(ChannelOption.TCP_NODELAY, true);
    // TODO we should use an explicit (configurable?) timeout here
    // bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000);
    bootstrap.handler(channelHandler);
    // Start the client.
    logger.trace("Starting connection attempt on tcp layer to {}:{}", address.getHostAddress(), port);
    final ChannelFuture f = bootstrap.connect(address, port);
    // Wait for sync
    try {
        f.sync();
    } catch (Exception e) {
        // Shutdown worker group here and wait for it
        logger.info("Unable to connect, shutting down worker thread.");
        workerGroup.shutdownGracefully().awaitUninterruptibly();
        logger.debug("Worker Group is shutdown successfully.");
        throw new PlcConnectionException("Unable to Connect on TCP Layer to " + address.getHostAddress() + ":" + port, e);
    }
    // Wait till the session is finished initializing.
    return f.channel();
} catch (Exception e) {
    throw new PlcConnectionException("Error creating channel.", e);
}
which addresses the issues above. Thus, the call only finishes when everything is properly cleaned up.
My tests now show a constant number of open file descriptors.

Netty NIO: read the incoming messages from ChannelFuture in Java

I am trying to use the following code, which is an implementation of web sockets in Netty NIO. I have implemented a JavaFX GUI, and from the GUI I want to read the messages that are received from the server or from other clients. The NettyClient code is the following:
public static ChannelFuture callBack() throws Exception {
    String host = "localhost";
    int port = 8080;
    try {
        Bootstrap b = new Bootstrap();
        b.group(workerGroup);
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.handler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline().addLast(new RequestDataEncoder(), new ResponseDataDecoder(),
                        new ClientHandler(i -> {
                            synchronized (lock) {
                                connectedClients = i;
                                lock.notifyAll();
                            }
                        }));
            }
        });
        ChannelFuture f = b.connect(host, port).sync();
        //f.channel().closeFuture().sync();
        return f;
    } finally {
        //workerGroup.shutdownGracefully();
    }
}

public static void main(String[] args) throws Exception {
    ChannelFuture ret;
    ClientHandler obj = new ClientHandler(i -> {
        synchronized (lock) {
            connectedClients = i;
            lock.notifyAll();
        }
    });
    ret = callBack();
    int connected = connectedClients;
    if (connected != 2) {
        System.out.println("The number of the connected clients is not two before locking");
        synchronized (lock) {
            while (true) {
                connected = connectedClients;
                if (connected == 2)
                    break;
                System.out.println("The number of the connected clients is not two");
                lock.wait();
            }
        }
    }
    System.out.println("The number of the connected clients is two: " + connected);
    ret.channel().read(); // can I use that from other parts of the code in order to read the incoming messages?
}
How can I use the ChannelFuture returned from callBack in other parts of my code in order to read the incoming messages? Do I need to call callBack again, or how can I receive the updated messages of the channel? Could I possibly use, from my code (inside a button event), something like ret.channel().read() (so as to take the last message)?
Reading that code: the NettyClient is used to create the connection (with ClientHandler). Once the connect is done, ClientHandler.channelActive is called by Netty; if you want to send data to the server, you should put some code there. If this connection gets a message from the server, ClientHandler.channelRead is called by Netty; put your code to handle the message there.
You also need to read the docs to know how Netty encoders/decoders work.
How can I use the ChannelFuture returned from callBack in other parts of my code in order to read the incoming messages?
Share the ClientHandler created by NettyClient (NettyClient.java line 29).
Do I need to call callBack again, or how can I receive the updated messages of the channel?
When a server message comes in, ClientHandler.channelRead is called.
Could I possibly use, from my code (inside a button event), something like ret.channel().read() (so as to take the last message)?
Yes, you could, but that is not the Netty way. To play with Netty, you write callbacks (when a message comes in, when a message is sent, ...) and wait for Netty to call your code; that is, the driver is Netty, not you.
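To make that concrete, here is a sketch of such a handler for Netty 4 (the class name, the Object message type, and the written String are illustrative assumptions; the real ClientHandler depends on the pipeline's encoder/decoder):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

// Sketch of a Netty 4 inbound handler: channelActive fires once the connection
// is established, channelRead0 fires for every message the pipeline's decoder
// produces. Writing a String here assumes the pipeline has an encoder for it.
public class ExampleClientHandler extends SimpleChannelInboundHandler<Object> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Connection is up: this is the place to send the first request.
        ctx.writeAndFlush("hello server");
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
        // A decoded message arrived from the server: hand it to the rest of
        // the application here (e.g. publish it to the JavaFX thread).
        System.out.println("received: " + msg);
    }
}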
Lastly, do you really need such a heavy library to do networking? If not, try this code; it is simple and easy to understand.

Netty 4.0.23 multiple hosts single client

My question is about creating multiple TCP clients to multiple hosts using the same event loop group in Netty 4.0.23 Final. I must admit that I don't quite understand Netty 4's client threading business, especially with the loads of confusing references to Netty 3.x implementations I hit during my research on the internet.
With the following code, I establish a connection with a single server and send random commands using a command queue:
public class TCPsocket {
    private static final CircularFifoQueue CommandQueue = new CircularFifoQueue(20);
    private final EventLoopGroup workerGroup;
    private final TcpClientInitializer tcpHandlerInit; // all handlers shareable

    public TCPsocket() {
        workerGroup = new NioEventLoopGroup();
        tcpHandlerInit = new TcpClientInitializer();
    }

    public void connect(String host, int port) throws InterruptedException {
        try {
            Bootstrap b = new Bootstrap();
            b.group(workerGroup);
            b.channel(NioSocketChannel.class);
            b.remoteAddress(host, port);
            b.handler(tcpHandlerInit);
            Channel ch = b.connect().sync().channel();
            ChannelFuture writeCommand = null;
            for (;;) {
                if (!CommandQueue.isEmpty()) {
                    writeCommand = ch.writeAndFlush(CommandExecute()); // CommandExecute() fetches a command from the CommandQueue and encodes it into a byte array
                }
                if (CommandQueue.isFull()) { // this will never happen ... or should never happen
                    ch.closeFuture().sync();
                    break;
                }
            }
            if (writeCommand != null) {
                writeCommand.sync();
            }
        } finally {
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String args[]) throws InterruptedException {
        TCPsocket socket = new TCPsocket();
        socket.connect("192.168.0.1", 2101);
    }
}
In addition to executing commands off of the command queue, this client keeps receiving periodic responses from the server, as a response to an initial command that is sent as soon as the channel becomes active. In one of the registered handlers (in the TcpClientInitializer implementation), I have:
@Override
public void channelActive(ChannelHandlerContext ctx) {
    ctx.writeAndFlush(firstMessage);
    System.out.println("sent first message\n");
}
which activates a feature in the connected-to server, triggering a periodic packet that is returned from the server through the life span of my application.
The problem comes when I try to use this same setup to connect to multiple servers,
by looping through a string array of known server IPs:
public static void main(String args[]) throws InterruptedException {
    String[] hosts = new String[]{"192.168.0.2", "192.168.0.4", "192.168.0.5"};
    TCPsocket socket = new TCPsocket();
    for (String host : hosts) {
        socket.connect(host, 2101);
    }
}
Once the first connection is established and the server (192.168.0.2) starts sending the designated periodic packets, no other connection is attempted, which (I think) is the result of the main thread waiting on the connection to die and hence never running the second iteration of the for loop. The discussion in this question leads me to think that the connection process is started in a separate thread, allowing the main thread to continue executing, but that's not what I see here. So what is actually happening? And how would I go about implementing multiple host connections using the same client in Netty 4.0.23 Final?
Thanks in advance
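A note on the blocking behaviour: connect() never returns because it spins in the for (;;) command loop and, if the queue ever fills, blocks in ch.closeFuture().sync(), so main never reaches the next host. A minimal sketch of one alternative shape, assuming the command handling is moved out of connect(): connect to all hosts first on the shared group, then wait on the close futures afterwards. The structure below is illustrative, not code from the original thread; hosts, port and TcpClientInitializer are taken from the question.

import java.util.ArrayList;
import java.util.List;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class MultiHostClient {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                    .group(workerGroup)
                    .channel(NioSocketChannel.class)
                    .handler(new TcpClientInitializer());

            List<Channel> channels = new ArrayList<>();
            for (String host : new String[]{"192.168.0.2", "192.168.0.4", "192.168.0.5"}) {
                // sync() here only waits for the connect itself, not for the
                // channel to close, so the loop moves on to the next host.
                channels.add(b.connect(host, 2101).sync().channel());
            }

            // All connections are up; now wait for each of them to finish.
            for (Channel ch : channels) {
                ch.closeFuture().sync();
            }
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}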

Exception during Netty server shutdown

I have an application running on Tomcat. I use Netty 4 for WebSocket handling.
The Netty server runs from a ServletContextListener: it starts in the contextInitialized method and stops in contextDestroyed.
This is my class for the Netty server:
public class WebSocketServer {
    private final int port;
    private final EventLoopGroup bossGroup;
    private final EventLoopGroup workerGroup;
    private Channel serverChannel;

    public WebSocketServer(int port) {
        this.port = port;
        bossGroup = new NioEventLoopGroup(1);
        workerGroup = new NioEventLoopGroup();
    }

    public void run() throws Exception {
        final ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
                .childHandler(new WebSocketServerInitializer());
        serverChannel = b.bind(port).sync().channel();
        System.out.println("Web socket server started at port " + port + '.');
        System.out.println("Open your browser and navigate to http://localhost:" + port + '/');
    }

    public void stop() {
        if (serverChannel != null) {
            ChannelFuture chFuture = serverChannel.close();
            chFuture.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    shutdownWorkers();
                }
            });
        } else {
            shutdownWorkers();
        }
    }

    private void shutdownWorkers() {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}
It works fine after starting, but when I try to stop Tomcat I get an exception:
INFO: Illegal access: this web application instance has been stopped already. Could not load io.netty.util.concurrent.DefaultPromise$3. The eventual following stack trace is caused by an error thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access, and has no functional impact.
java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1610)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1569)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:592)
at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:403)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:139)
at java.lang.Thread.run(Thread.java:662)
After that, Tomcat hangs.
What can be the reason?
I assume you call shutdownWorkers() somewhere from Servlet.destroy() or use some other mechanism that ensures your Server goes down when servlet stops / unloads.
Then you need to do
void shutdownWorkers() {
    Future fb = bossGroup.shutdownGracefully();
    Future fw = workerGroup.shutdownGracefully();
    try {
        fb.await();
        fw.await();
    } catch (InterruptedException ignore) {}
}
This is because shutdownGracefully() returns a Future, and without waiting for it to complete you leave the things that try to close the connections in a very stressful environment. It also makes sense to first initiate all shutdowns and then wait for the futures to complete; this way it all runs in parallel and happens faster.
It fixed the issue for me. Obviously, you can be nicer to your system by not swallowing InterruptedException, wrapping each call in a tidy method, and putting a reasonable timeout on each await(). A nice exercise in general, but in reality you most probably wouldn't care at this point in your code.
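A sketch of that tidier variant, with a timeout on each await and the interrupt preserved (the five-second timeout is an arbitrary choice for illustration; assumes io.netty.util.concurrent.Future and java.util.concurrent.TimeUnit are imported):

// Sketch: initiate both shutdowns first, then await each with a timeout.
void shutdownWorkers() {
    Future<?> fb = bossGroup.shutdownGracefully();
    Future<?> fw = workerGroup.shutdownGracefully();
    try {
        if (!fb.await(5, TimeUnit.SECONDS)) {
            System.err.println("Boss group did not shut down within the timeout.");
        }
        if (!fw.await(5, TimeUnit.SECONDS)) {
            System.err.println("Worker group did not shut down within the timeout.");
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve the interrupt instead of swallowing it
    }
}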
Side note: and yes, for WebSockets you will be better off with Tomcat's native, standards-compliant and robust implementation. Netty is awesome for many other things, but it would be the wrong tool here.

Netty: channel.write hangs on disconnect

I have the following situation:
A new channel connection is opened in this way:
ClientBootstrap bootstrap = new ClientBootstrap(
new OioClientSocketChannelFactory(Executors.newCachedThreadPool()));
icapClientChannelPipeline = new ICAPClientChannelPipeline();
bootstrap.setPipelineFactory(icapClientChannelPipeline);
ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
channel = future.awaitUninterruptibly().getChannel();
This is working as expected.
Stuff is written to the channel in the following way:
channel.write(chunk)
This also works as expected when the connection to the server is still alive. But if the server goes down (machine goes offline), the call hangs and doesn't return.
I confirmed this by adding log statements before and after the channel.write(chunk). When the connection is broken, only the log statement before is displayed.
What is causing this? I thought these calls are all async and return immediately? I also tried with NioClientSocketChannelFactory, same behavior.
I tried to use channel.getCloseFuture(), but the listener never gets called. I tried to check the channel before writing with channel.isOpen(), channel.isConnected() and channel.isWritable(), and they are always true...
How to work around this? No exception is thrown and nothing really happens... Some questions like this one and this one indicate that it isn't possible to detect a channel disconnect without a heartbeat. But I can't implement a heartbeat because I can't change the server side.
Environment: Netty 3, JDK 1.7
OK, I solved this one on my own last week, so I'll add the answer for completeness.
I was wrong in point 3, because I thought I would have to change both the client and the server side for a heartbeat. As described in this question, you can use the IdleStateAwareHandler for this purpose. I implemented it like this:
The IdleStateAwareHandler:
public class IdleStateAwareHandler extends IdleStateAwareChannelHandler {
    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.READER_IDLE) {
            e.getChannel().write("heartbeat-reader_idle");
        } else if (e.getState() == IdleState.WRITER_IDLE) {
            Logger.getLogger(IdleStateAwareHandler.class.getName()).log(
                    Level.WARNING, "WriteIdle detected, closing channel");
            e.getChannel().close();
            e.getChannel().write("heartbeat-writer_idle");
        } else if (e.getState() == IdleState.ALL_IDLE) {
            e.getChannel().write("heartbeat-all_idle");
        }
    }
}
The PipeLine:
public class ICAPClientChannelPipeline implements ICAPClientPipeline {
    ICAPClientHandler icapClientHandler;
    ChannelPipeline pipeline;

    public ICAPClientChannelPipeline() {
        icapClientHandler = new ICAPClientHandler();
        pipeline = pipeline();
        pipeline.addLast("idleStateHandler", new IdleStateHandler(new HashedWheelTimer(10, TimeUnit.MILLISECONDS), 5, 5, 5));
        pipeline.addLast("idleStateAwareHandler", new IdleStateAwareHandler());
        pipeline.addLast("encoder", new IcapRequestEncoder());
        pipeline.addLast("chunkSeparator", new IcapChunkSeparator(1024 * 4));
        pipeline.addLast("decoder", new IcapResponseDecoder());
        pipeline.addLast("chunkAggregator", new IcapChunkAggregator(1024 * 4));
        pipeline.addLast("handler", icapClientHandler);
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        return pipeline;
    }
}
This detects any read or write idle state on the channel after 5 seconds.
As you can see it is a little bit ICAP-specific but this doesn't matter for the question.
To react when the idle handler closes the channel, I need the following listener on the close future:
channel.getCloseFuture().addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        doSomething();
    }
});
