I'm digging into a bug in my Netty program: I use a heartbeat handler between the server and client. When the client system reboots, the heartbeat handler on the server side notices the timeout and closes the Channel, but sometimes the listener registered on the Channel's CloseFuture is never notified, which is weird.
After digging through the Netty 3.5.7 source code, I figured out that the only way a Channel's CloseFuture gets notified is through AbstractChannel.setClosed(). Maybe that method is never executed when the Channel is closed; see below:
NioServerSocketPipelineSink:
private static void close(NioServerSocketChannel channel, ChannelFuture future) {
    boolean bound = channel.isBound();
    try {
        if (channel.socket.isOpen()) {
            channel.socket.close();
            Selector selector = channel.selector;
            if (selector != null) {
                selector.wakeup();
            }
        }

        // Make sure the boss thread is not running so that that the future
        // is notified after a new connection cannot be accepted anymore.
        // See NETTY-256 for more information.
        channel.shutdownLock.lock();
        try {
            if (channel.setClosed()) {
                future.setSuccess();
                if (bound) {
                    fireChannelUnbound(channel);
                }
                fireChannelClosed(channel);
            } else {
                future.setSuccess();
            }
        } finally {
            channel.shutdownLock.unlock();
        }
    } catch (Throwable t) {
        future.setFailure(t);
        fireExceptionCaught(channel, t);
    }
}
On some platforms channel.socket.close() may throw an IOException, which means channel.setClosed() may never be executed, so the listener registered on the CloseFuture may never be notified.
Here is my question: have you ever encountered this problem? Is this analysis right?
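If the analysis is right, one defensive option (just a sketch, not verified across Netty versions) would be to listen on the operation future returned by close() rather than on getCloseFuture(), since the quoted sink code completes that future in every branch, either via setSuccess() or setFailure():

// Sketch (Netty 3 API): the future returned by close() is completed even if
// channel.socket.close() throws and setClosed() is skipped, so a listener on
// it should always fire.
ChannelFuture closeOp = channel.close();
closeOp.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        // clean up here, whether the close succeeded or failed
    }
});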
I figured out that my heartbeat handler caused the problem: it never timed out, so it never closed the channel. The check below runs in a timer:
if ((now - lastReadTime > heartbeatTimeout)
        && (now - lastWriteTime > heartbeatTimeout)) {
    getChannel().close();
    stopHeartbeatTimer();
}
where lastReadTime and lastWriteTime are updated like below:
public void writeComplete(ChannelHandlerContext ctx, WriteCompletionEvent e)
        throws Exception {
    lastWriteTime = System.currentTimeMillis();
    super.writeComplete(ctx, e);
}

public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    lastReadTime = System.currentTimeMillis();
    super.messageReceived(ctx, e);
}
The remote client is Windows XP and the server is Linux, both on JDK 1.6.
I think writeComplete is still invoked internally after the remote client's system starts rebooting, even though messageReceived is not invoked and no IOException is thrown during this period.
I will redesign the heartbeat handler: attach a timestamp and a HEART_BEAT flag to the heartbeat packet; when the peer receives the packet, it sends the packet back with the same timestamp and an ACK_HEART_BEAT flag; when the original side receives this ack packet, it uses the timestamp to update lastWriteTime.
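A rough sketch of that redesign (the HeartbeatPacket type, its flags and the lastAckTime field are hypothetical placeholders; only the exchange logic matters here):

// Sketch of the redesigned heartbeat exchange (Netty 3 style). HeartbeatPacket,
// HEART_BEAT and ACK_HEART_BEAT are placeholders, not existing classes.
public class HeartbeatHandler extends SimpleChannelHandler {

    private volatile long lastAckTime = System.currentTimeMillis();

    // Called by the heartbeat timer: send a probe carrying the current timestamp.
    public void sendHeartbeat(Channel channel) {
        channel.write(new HeartbeatPacket(HeartbeatPacket.HEART_BEAT, System.currentTimeMillis()));
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        Object msg = e.getMessage();
        if (msg instanceof HeartbeatPacket) {
            HeartbeatPacket packet = (HeartbeatPacket) msg;
            if (packet.getFlag() == HeartbeatPacket.HEART_BEAT) {
                // Peer probe: echo it back with the same timestamp and an ACK flag.
                ctx.getChannel().write(new HeartbeatPacket(HeartbeatPacket.ACK_HEART_BEAT, packet.getTimestamp()));
            } else if (packet.getFlag() == HeartbeatPacket.ACK_HEART_BEAT) {
                // Only an ACK from the peer counts as proof that the link is alive.
                lastAckTime = System.currentTimeMillis();
            }
            return;
        }
        super.messageReceived(ctx, e);
    }

    // Timer check: time out on missing ACKs instead of on writeComplete events.
    public boolean isTimedOut(long heartbeatTimeout) {
        return System.currentTimeMillis() - lastAckTime > heartbeatTimeout;
    }
}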
I have created an inbound handler of type SimpleChannelInboundHandler and added it to the pipeline. My intention is that every time a connection is established, I want to send an application message called a session open message and make the connection ready to send the actual messages. To achieve this, the inbound handler
overrides channelActive(), where the session open message is sent; in response I get a session open confirmation message. Only after that should I be able to send any number of actual business messages. I am using FixedChannelPool, initialised as follows. This works well some of the time on startup. But if the remote host closes the connection and a message is later sent via sendMessage() below, the message goes out even before the session open message from channelActive() and its response have been exchanged. So the server ignores the message, because the session was not yet open when the business message was sent.
What I am looking for is for the pool to return only channels whose channelActive() has already sent the session open message and which have received the session open confirmation from the server. How do I deal with this situation?
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        if (ctx.channel().isWritable()) {
            ctx.channel().writeAndFlush("open session message".getBytes());
        }
    }
}
// At the time of loading the application
public void init() {
    final Bootstrap bootStrap = new Bootstrap();
    bootStrap.group(group).channel(NioSocketChannel.class).remoteAddress(hostname, port);
    fixedPool = new FixedChannelPool(bootStrap, getChannelHandler(), 5);
    // This is done to initialise connections; channelActive() from the above handler
    // is invoked to open the session on startup
    for (int i = 0; i < config.getMaxConnections(); i++) {
        fixedPool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) throws Exception {
                if (future.isSuccess()) {
                    // acquired successfully
                } else {
                    LOGGER.error(" Channel initialization failed...>>", future.cause());
                }
            }
        });
    }
}
// To actually send a message, the following method is invoked by the application.
public void sendMessage(final String businessMessage) {
    fixedPool.acquire().addListener(new FutureListener<Channel>() {
        @Override
        public void operationComplete(Future<Channel> future) throws Exception {
            if (future.isSuccess()) {
                Channel channel = future.get();
                if (channel.isOpen() && channel.isActive() && channel.isWritable()) {
                    channel.writeAndFlush(businessMessage).addListener(new GenericFutureListener<ChannelFuture>() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // success msg
                            } else {
                                // failure msg
                            }
                        }
                    });
                    fixedPool.release(channel);
                }
            } else {
                // Failure
            }
        }
    });
}
If there is no specific reason that you need to use a FixedChannelPool, then you can use another data structure (a List or Map) to store the Channels. You can add a channel to the data structure after sending the open session message and remove it in the channelInactive method.
If you need to perform bulk operations on channels, you can use a ChannelGroup for that purpose.
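For example (a rough sketch against the Netty 4 API; where you add channels depends on how you detect the session-open confirmation):

// Sketch: track session-ready channels in a ChannelGroup.
ChannelGroup readyChannels = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

// Once a channel has received its session-open confirmation:
readyChannels.add(channel);

// Closed channels are removed from the group automatically, and bulk
// operations reach every ready channel at once:
readyChannels.writeAndFlush(businessMessage);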
If you still want to use the FixedChannelPool, you may set an attribute on the channel recording whether the open message was sent:
ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
you can get the attribute as follows in your sendMessage function:
Boolean sent = channel.attr(OPEN_MESSAGE_SENT).get();
and in the channelInactive you may set the same to false or remove it.
Note OPEN_MESSAGE_SENT is an AttributeKey:
public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT = AttributeKey.valueOf("OPEN_MESSAGE_SENT");
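Putting those pieces together, a rough sketch of how the attribute could gate sending (this variant flips the flag only once the session-open confirmation arrives, which is what the question needs; isSessionOpenConfirmation() is a hypothetical placeholder for however you recognise that response):

// Sketch: mark the channel ready only after the session-open confirmation.
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {

    public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT =
            AttributeKey.valueOf("OPEN_MESSAGE_SENT");

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        ctx.channel().writeAndFlush("open session message".getBytes());
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, byte[] msg) throws Exception {
        if (isSessionOpenConfirmation(msg)) {
            ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        ctx.channel().attr(OPEN_MESSAGE_SENT).set(false);
        super.channelInactive(ctx);
    }

    private boolean isSessionOpenConfirmation(byte[] msg) {
        return true; // placeholder for your protocol check
    }
}

// In sendMessage(), after acquiring the channel, only write if it is marked ready;
// otherwise release it back to the pool and retry or queue the message:
Boolean ready = channel.attr(SessionHandler.OPEN_MESSAGE_SENT).get();
if (Boolean.TRUE.equals(ready)) {
    channel.writeAndFlush(businessMessage);
}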
I know this is a rather old question, but I stumbled across a similar issue. It was not quite the same: my problem was that the ChannelInitializer in Bootstrap.handler was never called.
The solution was to add the pipeline handlers to the pool handler's channelCreated method.
Here is my pool definition code that works now:
pool = new FixedChannelPool(httpBootstrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(HTTP_CODEC, new HttpClientCodec());
        pipeline.addLast(HTTP_HANDLER, new NettyHttpClientHandler());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // NOOP
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 10);
So in the getChannelHandler() method I assume you're creating a ChannelPoolHandler. In its channelCreated method you could send your session message (ch.writeAndFlush("open session message".getBytes());), assuming you only need to send it once when a connection is created; if you need to send the session message every time, you could add it to the channelAcquired method instead.
Here's what I know so far (please correct me):
In the RabbitMQ Java client, operations on a channel throw IOException when there is a general network failure (malformed data from broker, authentication failures, missed heartbeats).
Operations on a channel can also throw the ShutdownSignalException unchecked exception, typically an AlreadyClosedException when we tried to perform an action on the channel/connection after it has been shut down.
The shutdown process happens in the event of a "network failure, internal failure or explicit local shutdown" (e.g. via channel.close() or connection.close()). The shutdown event propagates down the "topology", from Connection -> Channel -> Consumer, and when it reaches the Channel, the Consumer's handleShutdownSignal() method gets called.
A user can also add a shutdown listener which is called after the shutdown process completes.
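For reference, registering one looks roughly like this (a sketch against the RabbitMQ Java client API; the branch bodies are placeholders):

// Sketch: a shutdown listener fires after the shutdown completes, for both
// application-initiated closes and broker/network failures.
connection.addShutdownListener(new ShutdownListener() {
    @Override
    public void shutdownCompleted(ShutdownSignalException cause) {
        if (cause.isInitiatedByApplication()) {
            // we called close() ourselves
        } else {
            // broker-side close or network failure
        }
    }
});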
Here is what I'm missing:
Since an IOException indicates a network failure, does it also initiate a shutdown request?
How does using auto-recovery mode affect shutdown requests? Does it cause channel operations to block while it tries to reconnect to the channel, or will the ShutdownSignalException still be thrown?
Here is how I'm handling exceptions at the moment, is this a sensible approach?
My setup is that I'm polling a QueueingConsumer and dispatching tasks to a worker pool. The RabbitMQ client is encapsulated in MyRabbitMQWrapper here. When an exception occurs while polling the queue, I just gracefully shut everything down and restart the client. When an exception occurs in a worker, I also just log it and finish the worker.
My biggest worry (related to question 1): suppose an IOException occurs in the worker, so the task doesn't get acked. If a shutdown does not then occur, I now have an un-acked task that will be in limbo forever.
Pseudo-code:
class Main {
    public static void main(String[] args) throws InterruptedException {
        Main main = new Main();
        while (true) {
            main.run();
            // Easy way to restart the client; the connection has been
            // closed so RabbitMQ will re-queue any un-acked tasks.
            log.info("Shutdown occurred, restarting in 5 seconds");
            Thread.sleep(5000);
        }
    }

    public void run() {
        MyRabbitMQWrapper rw = new MyRabbitMQWrapper("localhost");
        try {
            rw.connect();
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Wait for a message on the QueueingConsumer
                    MyMessage t = rw.getNextMessage();
                    workerPool.submit(new MyTaskRunnable(rw, t));
                } catch (InterruptedException | IOException | ShutdownSignalException e) {
                    // Handle all AMQP library exceptions by cleaning up and returning
                    log.warn("Shutting down", e);
                    workerPool.shutdown();
                    break;
                }
            }
        } catch (IOException e) {
            log.error("Could not connect to broker", e);
        } finally {
            try {
                rw.close();
            } catch (IOException e) {
                log.info("Could not close connection");
            }
        }
    }
}

class MyTaskRunnable implements Runnable {

    ....

    public void run() {
        doStuff();
        try {
            rw.ack(...);
        } catch (IOException | ShutdownSignalException e) {
            log.warn("Could not ack task");
        }
    }
}
I'm developing a Google Glass app which needs to listen for UDP packets in a worker thread (integrating with an existing system which sends UDP packets). I previously posted a question (see here) and received an answer which provided some guidance on how to do this. Using the approach in the other discussion I'll have a worker thread which is blocked on DatagramSocket.receive().
Further reading suggests to me that I'll need to be able to start/stop this on demand. So this brings me to the question I'm posting here. How can I do the above in such a way as to be able to interrupt (gracefully) the UDP listening? Is there some way I can "nicely" ask the socket to break out of the receive() call from another thread?
Or is there another way to listen for UDP packets in an interruptable fashion so I can start/stop the listener thread as needed in response to device events?
My recommendation:
private DatagramSocket mSocket;

@Override
public void run() {
    Exception ex = null;
    try {
        // read while not interrupted; isInterrupted() keeps the flag set for the check below
        while (!isInterrupted()) {
            ....
            mSocket.receive(...); // throws once release() closes the socket
        }
    } catch (Exception e) {
        if (isInterrupted()) {
            // the user did it
        } else {
            ex = e;
        }
    } finally {
        // always release
        release();
        // rethrow the exception if it wasn't caused by an interrupt
        if (ex != null) {
            throw new RuntimeException(ex);
        }
    }
}

public void release() {
    // causes an exception if we're in the middle of receive()
    if (mSocket != null) {
        mSocket.close();
        mSocket = null;
    }
}

@Override
public void interrupt() {
    super.interrupt();
    release();
}
Clean and simple: it always releases the socket, and interrupting stops the thread cleanly in both cases.
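Usage would then look something like this (a sketch; UdpListenerThread is a hypothetical name for the Thread subclass holding the run()/release()/interrupt() code above):

// Sketch: start the listener when needed, stop it from another thread.
UdpListenerThread listener = new UdpListenerThread();
listener.start();

// ... later, e.g. when the Glass activity pauses:
listener.interrupt();   // closes the socket, so receive() unblocks immediately
try {
    listener.join();    // wait for the finally block to release resources
} catch (InterruptedException ignored) {
    Thread.currentThread().interrupt();
}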
I have the following situation:
A new channel connection is opened in this way:
ClientBootstrap bootstrap = new ClientBootstrap(
new OioClientSocketChannelFactory(Executors.newCachedThreadPool()));
icapClientChannelPipeline = new ICAPClientChannelPipeline();
bootstrap.setPipelineFactory(icapClientChannelPipeline);
ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
channel = future.awaitUninterruptibly().getChannel();
This is working as expected.
Stuff is written to the channel in the following way:
channel.write(chunk)
This also works as expected when the connection to the server is still alive. But if the server goes down (machine goes offline), the call hangs and doesn't return.
I confirmed this by adding log statements before and after the channel.write(chunk). When the connection is broken, only the log statement before is displayed.
What is causing this? I thought these calls were all async and returned immediately? I also tried with NioClientSocketChannelFactory, same behavior.
I tried to use channel.getCloseFuture(), but the listener never gets called; I tried to check the channel before writing with channel.isOpen(), channel.isConnected() and channel.isWritable(), and they are always true...
How can I work around this? No exception is thrown and nothing really happens... Some questions like this one and this one indicate that it isn't possible to detect a channel disconnect without a heartbeat. But I can't implement a heartbeat because I can't change the server side.
Environment: Netty 3, JDK 1.7
OK, I solved this one on my own last week, so I'll add the answer for completeness.
I was wrong in point 3, because I thought I'd have to change both the client and the server side for a heartbeat. As described in this question, you can use the IdleStateAwareChannelHandler for this purpose. I implemented it like this:
The IdleStateAwareHandler:
public class IdleStateAwareHandler extends IdleStateAwareChannelHandler {

    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.READER_IDLE) {
            e.getChannel().write("heartbeat-reader_idle");
        } else if (e.getState() == IdleState.WRITER_IDLE) {
            Logger.getLogger(IdleStateAwareHandler.class.getName()).log(
                    Level.WARNING, "WriteIdle detected, closing channel");
            e.getChannel().close();
            e.getChannel().write("heartbeat-writer_idle");
        } else if (e.getState() == IdleState.ALL_IDLE) {
            e.getChannel().write("heartbeat-all_idle");
        }
    }
}
The PipeLine:
public class ICAPClientChannelPipeline implements ICAPClientPipeline {

    ICAPClientHandler icapClientHandler;
    ChannelPipeline pipeline;

    public ICAPClientChannelPipeline() {
        icapClientHandler = new ICAPClientHandler();
        pipeline = pipeline();
        pipeline.addLast("idleStateHandler",
                new IdleStateHandler(new HashedWheelTimer(10, TimeUnit.MILLISECONDS), 5, 5, 5));
        pipeline.addLast("idleStateAwareHandler", new IdleStateAwareHandler());
        pipeline.addLast("encoder", new IcapRequestEncoder());
        pipeline.addLast("chunkSeparator", new IcapChunkSeparator(1024 * 4));
        pipeline.addLast("decoder", new IcapResponseDecoder());
        pipeline.addLast("chunkAggregator", new IcapChunkAggregator(1024 * 4));
        pipeline.addLast("handler", icapClientHandler);
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        return pipeline;
    }
}
This detects any read or write idle state on the channel after 5 seconds.
As you can see, it is a little bit ICAP-specific, but that doesn't matter for the question.
To react when the idle handler closes the channel, I need the following listener:
channel.getCloseFuture().addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        doSomething();
    }
});
Retry Connection in Netty
I am building a client socket system. The requirements are:
First attempt to connect to the remote server.
When the first attempt fails, keep on trying until the server is online.
I would like to know whether there is such a feature in Netty, or how best I can solve this.
Thank you very much.
This is the code snippet I am struggling with:
protected void connect() throws Exception {
    this.bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool()));
    // Configure the event pipeline factory.
    bootstrap.setPipelineFactory(new SmpPipelineFactory());
    bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
    bootstrap.setOption("sendBufferSize", 1048576);
    bootstrap.setOption("receiveBufferSize", 1048576);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);
    // Make a new connection.
    final ChannelFuture connectFuture = bootstrap
            .connect(new InetSocketAddress(config.getRemoteAddr(), config.getRemotePort()));
    channel = connectFuture.getChannel();
    connectFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (connectFuture.isSuccess()) {
                // Connection attempt succeeded:
                // Begin to accept incoming traffic.
                channel.setReadable(true);
            } else {
                // Close the connection if the connection attempt has failed.
                channel.close();
                logger.info("Unable to Connect to the Remote Socket server");
            }
        }
    });
}
Assuming netty 3.x the simplest example would be:
// Configure the client.
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

ChannelFuture future = null;
while (true)
{
    future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
    future.awaitUninterruptibly();
    if (future.isSuccess())
    {
        break;
    }
}
Obviously you'd want to have your own logic for the loop that sets a maximum number of tries, etc. Netty 4.x has a slightly different bootstrap, but the logic is the same. This is also synchronous, blocking, and ignores InterruptedException; in a real application you might register a ChannelFutureListener with the Future and be notified when the Future completes.
Added after the OP edited the question:
You have a ChannelFutureListener that is getting notified. If you want to then retry the connection, you're going to have to either have that listener hold a reference to the bootstrap, or communicate back to your main thread that the connection attempt failed and have it retry the operation. If you have the listener do it (which is the simplest way), be aware that you need to limit the number of retries to prevent infinite recursion, because it's being executed in the context of the Netty worker thread. If you exhaust your retries, again, you'll need to communicate that back to your main thread; you could do that via a volatile variable, or the observer pattern could be used.
When dealing with async you really have to think concurrently. There are a number of ways to skin that particular cat; one of them is sketched below.
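For illustration, a rough sketch of the listener-driven retry with a bounded number of attempts (the MAX_RETRIES constant and the AtomicInteger counter are assumptions layered on top of the OP's fields, not Netty API):

// Sketch: retry the connect from the ChannelFutureListener itself and give up
// after a fixed number of attempts. attempts and MAX_RETRIES are fields on the
// client class.
private final AtomicInteger attempts = new AtomicInteger();
private static final int MAX_RETRIES = 10;

private void doConnect() {
    ChannelFuture f = bootstrap.connect(
            new InetSocketAddress(config.getRemoteAddr(), config.getRemotePort()));
    f.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            if (future.isSuccess()) {
                attempts.set(0);
                channel = future.getChannel();
            } else if (attempts.incrementAndGet() <= MAX_RETRIES) {
                // This runs on a Netty worker thread, so keep it non-blocking;
                // a HashedWheelTimer could be used to add a delay between attempts.
                doConnect();
            } else {
                // Retries exhausted: signal the main thread (volatile flag, callback, ...).
            }
        }
    });
}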
Thank you, Brian Roach. The connected variable is volatile and can be accessed outside this code for further processing.
final InetSocketAddress sockAddr = new InetSocketAddress(
        config.getRemoteAddr(), config.getRemotePort());
final ChannelFuture connectFuture = bootstrap.connect(sockAddr);
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // Connection attempt succeeded:
            // Begin to accept incoming traffic.
            channel.setReadable(true);
            connected = true;
        } else {
            // Close the connection if the connection attempt has failed.
            channel.close();
            if (!connected) {
                logger.debug("Attempt to connect within " + ((double) frequency / (double) 1000) + " seconds");
                try {
                    Thread.sleep(frequency);
                } catch (InterruptedException e) {
                    logger.error(e.getMessage());
                }
                bootstrap.connect(sockAddr).addListener(this);
            }
        }
    }
});