I have the following situation:
A new channel connection is opened in this way:
ClientBootstrap bootstrap = new ClientBootstrap(
new OioClientSocketChannelFactory(Executors.newCachedThreadPool()));
icapClientChannelPipeline = new ICAPClientChannelPipeline();
bootstrap.setPipelineFactory(icapClientChannelPipeline);
ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
channel = future.awaitUninterruptibly().getChannel();
This is working as expected.
Stuff is written to the channel in the following way:
channel.write(chunk)
This also works as expected when the connection to the server is still alive. But if the server goes down (machine goes offline), the call hangs and doesn't return.
I confirmed this by adding log statements before and after the channel.write(chunk). When the connection is broken, only the log statement before is displayed.
What is causing this? I thought these calls were all asynchronous and returned immediately. I also tried NioClientSocketChannelFactory; same behavior.
I tried channel.getCloseFuture(), but the listener never gets called. I also tried checking the channel before writing with channel.isOpen(), channel.isConnected() and channel.isWritable(), but they are always true...
How can I work around this? No exception is thrown and nothing really happens... Some questions like this one and this one indicate that it isn't possible to detect a channel disconnect without a heartbeat, but I can't implement a heartbeat because I can't change the server side.
Environment: Netty 3, JDK 1.7
Ok, I solved this one on my own last week so I'll add the answer for completeness.
I was wrong in point 3 (that I couldn't implement a heartbeat), because I thought I would have to change both the client and the server side for it. As described in this question, you can use an IdleStateAwareChannelHandler for this purpose. I implemented it like this:
The IdleStateAwareHandler:
public class IdleStateAwareHandler extends IdleStateAwareChannelHandler {

    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.READER_IDLE) {
            e.getChannel().write("heartbeat-reader_idle");
        } else if (e.getState() == IdleState.WRITER_IDLE) {
            Logger.getLogger(IdleStateAwareHandler.class.getName()).log(
                    Level.WARNING, "WriteIdle detected, closing channel");
            e.getChannel().close();
            e.getChannel().write("heartbeat-writer_idle");
        } else if (e.getState() == IdleState.ALL_IDLE) {
            e.getChannel().write("heartbeat-all_idle");
        }
    }
}
The pipeline:
public class ICAPClientChannelPipeline implements ICAPClientPipeline {

    ICAPClientHandler icapClientHandler;
    ChannelPipeline pipeline;

    public ICAPClientChannelPipeline() {
        icapClientHandler = new ICAPClientHandler();
        pipeline = pipeline();
        pipeline.addLast("idleStateHandler", new IdleStateHandler(new HashedWheelTimer(10, TimeUnit.MILLISECONDS), 5, 5, 5));
        pipeline.addLast("idleStateAwareHandler", new IdleStateAwareHandler());
        pipeline.addLast("encoder", new IcapRequestEncoder());
        pipeline.addLast("chunkSeparator", new IcapChunkSeparator(1024 * 4));
        pipeline.addLast("decoder", new IcapResponseDecoder());
        pipeline.addLast("chunkAggregator", new IcapChunkAggregator(1024 * 4));
        pipeline.addLast("handler", icapClientHandler);
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        return pipeline;
    }
}
This detects any read or write idle state on the channel after 5 seconds.
As you can see it is a little bit ICAP-specific but this doesn't matter for the question.
To react when the idle handler closes the channel, I need the following listener on the channel's close future:
channel.getCloseFuture().addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        doSomething();
    }
});
Related
I have created an inbound handler of type SimpleChannelInboundHandler and added it to the pipeline. My intention is that every time a connection is established, a session open application message is sent to make the connection ready for the actual messages. To achieve this, the above inbound handler
overrides channelActive(), where the session open message is sent; in response I get a session open confirmation message. Only after that should I be able to send any number of actual business messages. I am using FixedChannelPool, initialised as follows. This works well on startup. But if the remote host closes the connection, and a message is later sent by calling sendMessage() below, the message goes out before the session open message from channelActive() and its response have been exchanged, so the server ignores it because the session is not open yet when the business message arrives.
What I am looking for is that the pool should return only those channels whose channelActive() has already sent the session open message and which have received the session open confirmation from the server. How do I deal with this situation?
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        if (ctx.channel().isWritable()) {
            ctx.channel().writeAndFlush("open session message".getBytes());
        }
    }
}
// At the time of loading the application
public void init() {
    final Bootstrap bootStrap = new Bootstrap();
    bootStrap.group(group).channel(NioSocketChannel.class).remoteAddress(hostname, port);
    fixedPool = new FixedChannelPool(bootStrap, getChannelHandler(), 5);
    // This is done to initialise the connections; channelActive() from the above handler
    // is invoked to open the session on startup
    for (int i = 0; i < config.getMaxConnections(); i++) {
        fixedPool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) throws Exception {
                if (future.isSuccess()) {
                } else {
                    LOGGER.error(" Channel initialization failed...>>", future.cause());
                }
            }
        });
    }
}
// To actually send the message, the following method is invoked by the application.
public void sendMessage(final String businessMessage) {
    fixedPool.acquire().addListener(new FutureListener<Channel>() {
        @Override
        public void operationComplete(Future<Channel> future) throws Exception {
            if (future.isSuccess()) {
                Channel channel = future.get();
                if (channel.isOpen() && channel.isActive() && channel.isWritable()) {
                    channel.writeAndFlush(businessMessage).addListener(new GenericFutureListener<ChannelFuture>() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // success msg
                            } else {
                                // failure msg
                            }
                        }
                    });
                    fixedPool.release(channel);
                }
            } else {
                // Failure
            }
        }
    });
}
If there is no specific reason that you need to use a FixedChannelPool then you can use another data structure (List/Map) to store the Channels. You can add a channel to the data structure after sending open session message and remove it in the channelInactive method.
If you need to perform bulk operations on channels you can use a ChannelGroup for the purpose.
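For illustration, one way to do that is to add the channel to a shared ChannelGroup only once the session open confirmation has arrived, and to remove it when the channel goes inactive. This is only a minimal sketch: SessionAwareHandler, READY_CHANNELS and isSessionOpenConfirmation() are hypothetical names, and the confirmation check depends on your protocol.
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.util.concurrent.GlobalEventExecutor;

public class SessionAwareHandler extends SimpleChannelInboundHandler<byte[]> {

    // Channels whose session has been confirmed by the server (hypothetical holder).
    public static final ChannelGroup READY_CHANNELS =
            new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        // Send the session open message as soon as the connection is established.
        ctx.writeAndFlush("open session message".getBytes());
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, byte[] msg) {
        if (isSessionOpenConfirmation(msg)) {
            // Only now is this connection ready for business messages.
            READY_CHANNELS.add(ctx.channel());
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        READY_CHANNELS.remove(ctx.channel());
        super.channelInactive(ctx);
    }

    // Placeholder for your protocol-specific check.
    private boolean isSessionOpenConfirmation(byte[] msg) {
        return new String(msg).startsWith("session open confirmation");
    }
}
The sending code would then take channels from READY_CHANNELS (or write to the whole group) instead of acquiring unconfirmed channels from the pool.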
If you still want to use the FixedChannelPool, you may set an attribute on the channel indicating whether the open message was sent:
ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
You can get the attribute as follows in your sendMessage function:
boolean sent = ctx.channel().attr(OPEN_MESSAGE_SENT).get();
and in the channelInactive you may set the same to false or remove it.
Note OPEN_MESSAGE_SENT is an AttributeKey:
public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT = AttributeKey.valueOf("OPEN_MESSAGE_SENT");
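Putting those pieces together, a minimal sketch of the attribute-based approach could look like the following; SessionAttributes, markSessionOpen() and the acquire/release flow are illustrative, not the asker's actual code:
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.pool.FixedChannelPool;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;

public final class SessionAttributes {

    public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT =
            AttributeKey.valueOf("OPEN_MESSAGE_SENT");

    // Called from the handler once the session open message has gone out.
    public static void markSessionOpen(ChannelHandlerContext ctx) {
        ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
    }

    // Variant of sendMessage() that only writes to channels whose session flag is set.
    public static void sendMessage(final FixedChannelPool fixedPool, final String businessMessage) {
        fixedPool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) throws Exception {
                if (!future.isSuccess()) {
                    return; // acquire failed
                }
                Channel channel = future.getNow();
                try {
                    Boolean sent = channel.attr(OPEN_MESSAGE_SENT).get();
                    if (Boolean.TRUE.equals(sent)) {
                        channel.writeAndFlush(businessMessage);
                    }
                    // else: the session is not open yet on this channel; the caller could retry later.
                } finally {
                    fixedPool.release(channel);
                }
            }
        });
    }

    private SessionAttributes() {
    }
}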
I know this is a rather old question, but I stumbled across a similar issue. Not quite the same: in my case the ChannelInitializer in Bootstrap.handler() was never called.
The solution was to add the pipeline handlers to the pool handler's channelCreated method.
Here is my pool definition code that works now:
pool = new FixedChannelPool(httpBootstrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(HTTP_CODEC, new HttpClientCodec());
        pipeline.addLast(HTTP_HANDLER, new NettyHttpClientHandler());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // NOOP
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 10);
So in the getChannelHandler() method I assume you're creating a ChannelPoolHandler. In its channelCreated method you could send your session message (ch.writeAndFlush("open session message".getBytes());), assuming you only need to send the session message once when a connection is created. If you need to send the session message every time, you could add it to the channelAcquired method instead.
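For illustration, a pool handler along those lines could look like the sketch below; SessionChannelPoolHandler is a hypothetical name, SessionHandler is the asker's existing inbound handler, and the writeAndFlush call is the send-once-per-created-connection variant described above:
import io.netty.channel.Channel;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.pool.ChannelPoolHandler;

public class SessionChannelPoolHandler implements ChannelPoolHandler {

    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast("sessionHandler", new SessionHandler()); // the existing inbound handler
        // Send the session open message once, right after the connection is created.
        ch.writeAndFlush("open session message".getBytes());
    }

    @Override
    public void channelAcquired(Channel ch) throws Exception {
        // NOOP. Move the writeAndFlush here if the session must be reopened on every acquire.
    }

    @Override
    public void channelReleased(Channel ch) throws Exception {
        // NOOP
    }
}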
I am trying to use the following code, which is an implementation of WebSockets in Netty NIO. I have implemented a JavaFX GUI, and from the GUI I want to read the messages that are received from the server or from other clients. The NettyClient code is the following:
public static ChannelFuture callBack() throws Exception {
    String host = "localhost";
    int port = 8080;
    try {
        Bootstrap b = new Bootstrap();
        b.group(workerGroup);
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.handler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline().addLast(new RequestDataEncoder(), new ResponseDataDecoder(),
                        new ClientHandler(i -> {
                            synchronized (lock) {
                                connectedClients = i;
                                lock.notifyAll();
                            }
                        }));
            }
        });

        ChannelFuture f = b.connect(host, port).sync();
        //f.channel().closeFuture().sync();
        return f;
    } finally {
        //workerGroup.shutdownGracefully();
    }
}
public static void main(String[] args) throws Exception {
    ChannelFuture ret;
    ClientHandler obj = new ClientHandler(i -> {
        synchronized (lock) {
            connectedClients = i;
            lock.notifyAll();
        }
    });
    ret = callBack();
    int connected = connectedClients;
    if (connected != 2) {
        System.out.println("The number of connected clients is not two before locking");
        synchronized (lock) {
            while (true) {
                connected = connectedClients;
                if (connected == 2)
                    break;
                System.out.println("The number of connected clients is not two");
                lock.wait();
            }
        }
    }
    System.out.println("The number of connected clients is two: " + connected);
    ret.channel().read(); // can I use that from other parts of the code in order to read the incoming messages?
}
How can I use the ChannelFuture returned from callBack() in other parts of my code in order to read the incoming messages? Do I need to call callBack() again, or how can I receive updated messages from the channel? Could I possibly use something like ret.channel().read() from my code (inside a button event) so as to take the last message?
Reading that code: NettyClient is used to create the connection (ClientHandler). Once the connect is done, ClientHandler.channelActive is called by Netty; if you want to send data to the server, put that code there. If this connection gets a message from the server, ClientHandler.channelRead is called by Netty; put your message-handling code there.
You also need to read the docs to learn how Netty encoders/decoders work.
How can I use the ChannelFuture returned from callBack in other parts of my code in order to read the incoming messages?
Share the ClientHandler instances created by NettyClient (NettyClient.java line 29).
Do I need to call callBack again, or how can I receive updated messages from the channel?
When a server message arrives, ClientHandler.channelRead is called.
Could I possibly use something like ret.channel().read() from my code (inside a button event) so as to take the last message?
Yes, you could, but that is not the Netty way. To play with Netty, you write callbacks (when a message comes in, when a message is sent, ...) and wait for Netty to call your code; that is, the driver is Netty, not you.
Lastly, do you really need such a heavy library to do networking? If not, try this code; it is simple and easy to understand.
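To make the callback-driven approach concrete, here is a minimal sketch of a client handler that pushes every decoded message to a callback registered by the GUI, instead of the GUI pulling from the ChannelFuture. GuiForwardingClientHandler and its Consumer parameter are illustrative names; the existing ClientHandler already takes a similar callback for the connected-clients counter.
import java.util.function.Consumer;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class GuiForwardingClientHandler extends SimpleChannelInboundHandler<Object> {

    private final Consumer<Object> onMessage;

    public GuiForwardingClientHandler(Consumer<Object> onMessage) {
        this.onMessage = onMessage;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Anything that must be sent to the server right after connecting goes here.
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
        // Netty calls this for every inbound message; hand it straight to the GUI callback.
        onMessage.accept(msg);
    }
}
From JavaFX the handler could then be constructed with something like new GuiForwardingClientHandler(msg -> Platform.runLater(() -> updateUi(msg))), where updateUi is whatever method refreshes the view.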
I'm digging into a bug in my Netty program: I use a heartbeat handler between the server and the client. When the client system is rebooting, the heartbeat handler on the server side becomes aware of the timeout and then closes the Channel, but sometimes the listener registered on the Channel's CloseFuture never gets notified, which is weird.
After digging through the Netty 3.5.7 source code, I figured out that the only way a Channel's CloseFuture gets notified is through AbstractChannel.setClosed(). Maybe this method is not executed when the Channel is closed; see below:
NioServerSocketPipelineSink:
private static void close(NioServerSocketChannel channel, ChannelFuture future) {
    boolean bound = channel.isBound();
    try {
        if (channel.socket.isOpen()) {
            channel.socket.close();
            Selector selector = channel.selector;
            if (selector != null) {
                selector.wakeup();
            }
        }

        // Make sure the boss thread is not running so that that the future
        // is notified after a new connection cannot be accepted anymore.
        // See NETTY-256 for more information.
        channel.shutdownLock.lock();
        try {
            if (channel.setClosed()) {
                future.setSuccess();
                if (bound) {
                    fireChannelUnbound(channel);
                }
                fireChannelClosed(channel);
            } else {
                future.setSuccess();
            }
        } finally {
            channel.shutdownLock.unlock();
        }
    } catch (Throwable t) {
        future.setFailure(t);
        fireExceptionCaught(channel, t);
    }
}
On some platforms channel.socket.close() may throw an IOException. That means channel.setClosed() may never be executed, so the listener registered on the CloseFuture may never be notified.
Here is my question: have you ever encountered this problem? Is the analysis right?
I figured out that my heartbeat handler caused the problem: it never timed out, so it never closed the channel. The check below runs in a timer:
if ((now - lastReadTime > heartbeatTimeout)
        && (now - lastWriteTime > heartbeatTimeout)) {
    getChannel().close();
    stopHeartbeatTimer();
}
where lastReadTime and lastWriteTime are updated like below:
public void writeComplete(ChannelHandlerContext ctx, WriteCompletionEvent e)
        throws Exception {
    lastWriteTime = System.currentTimeMillis();
    super.writeComplete(ctx, e);
}

public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    lastReadTime = System.currentTimeMillis();
    super.messageReceived(ctx, e);
}
The remote client is Windows XP and the server is Linux, both on JDK 1.6.
I think writeComplete is still invoked internally after the remote client's system reboots, although messageReceived is not invoked and no IOException is thrown during this period.
I will redesign the heartbeat handler, attaching a timestamp and a HEART_BEAT flag to the heartbeat packet. When the peer side receives the packet, it sends the packet back with the same timestamp and an ACK_HEART_BEAT flag; when the current side receives this ack packet, it uses the timestamp to update lastWriteTime.
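A rough sketch of that redesign, in Netty 3 style, might look like the following. HeartbeatPacket and its flags are hypothetical protocol types used only for illustration:
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

public class AckBasedHeartbeatHandler extends SimpleChannelHandler {

    // Hypothetical heartbeat message carrying a flag and the sender's timestamp.
    public static final class HeartbeatPacket {
        public static final int HEART_BEAT = 1;
        public static final int ACK_HEART_BEAT = 2;
        public final int flag;
        public final long timestamp;

        public HeartbeatPacket(int flag, long timestamp) {
            this.flag = flag;
            this.timestamp = timestamp;
        }
    }

    // The timer-based timeout check stays the same and reads these two fields.
    private volatile long lastReadTime = System.currentTimeMillis();
    private volatile long lastWriteTime = System.currentTimeMillis();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        lastReadTime = System.currentTimeMillis();
        Object msg = e.getMessage();
        if (msg instanceof HeartbeatPacket) {
            HeartbeatPacket packet = (HeartbeatPacket) msg;
            if (packet.flag == HeartbeatPacket.HEART_BEAT) {
                // The peer sent a heartbeat: echo the same timestamp back as an ack.
                ctx.getChannel().write(new HeartbeatPacket(HeartbeatPacket.ACK_HEART_BEAT, packet.timestamp));
            } else {
                // Only an acknowledged heartbeat proves the peer is alive, so the write
                // timestamp is updated here instead of in writeComplete.
                lastWriteTime = packet.timestamp;
            }
            return; // heartbeat traffic is not passed further up the pipeline
        }
        super.messageReceived(ctx, e);
    }
}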
Retry Connection in Netty
I am building a client socket system. The requirements are:
First attempt to connect to the remote server
When the first attempt fails, keep on trying until the server is online.
I would like to know whether there is such a feature in Netty, or how best I can solve this.
Thank you very much
This is the code snippet I am struggling with:
protected void connect() throws Exception {
    this.bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool()));

    // Configure the event pipeline factory.
    bootstrap.setPipelineFactory(new SmpPipelineFactory());
    bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
    bootstrap.setOption("sendBufferSize", 1048576);
    bootstrap.setOption("receiveBufferSize", 1048576);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    // Make a new connection.
    final ChannelFuture connectFuture = bootstrap
            .connect(new InetSocketAddress(config.getRemoteAddr(), config.getRemotePort()));
    channel = connectFuture.getChannel();
    connectFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (connectFuture.isSuccess()) {
                // Connection attempt succeeded:
                // Begin to accept incoming traffic.
                channel.setReadable(true);
            } else {
                // Close the connection if the connection attempt has failed.
                channel.close();
                logger.info("Unable to Connect to the Remote Socket server");
            }
        }
    });
}
Assuming Netty 3.x, the simplest example would be:
// Configure the client.
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

ChannelFuture future = null;
while (true)
{
    future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
    future.awaitUninterruptibly();
    if (future.isSuccess())
    {
        break;
    }
}
Obviously you'd want to have your own logic for the loop that sets a max number of tries, etc. Netty 4.x has a slightly different bootstrap, but the logic is the same. This is also synchronous, blocking, and ignores InterruptedException; in a real application you might register a ChannelFutureListener with the Future and be notified when the Future completes.
Added after the OP edited the question:
You have a ChannelFutureListener that is getting notified. If you want to then retry the connection you're going to have to either have that listener hold a reference to the bootstrap, or communicate back to your main thread that the connection attempt failed and have it retry the operation. If you have the listener do it (which is the simplest way) be aware that you need to limit the number of retries to prevent an infinite recursion - it's being executed in the context of the Netty worker thread. If you exhaust your retries, again, you'll need to communicate that back to your main thread; you could do that via a volatile variable, or the observer pattern could be used.
When dealing with async you really have to think concurrently. There's a number of ways to skin that particular cat.
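As a minimal sketch of the listener-driven variant with a bounded number of retries (MAX_RETRIES, the address handling and the way failure is reported are all illustrative choices):
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicInteger;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

public class RetryingConnectListener implements ChannelFutureListener {

    private static final int MAX_RETRIES = 5;

    private final ClientBootstrap bootstrap;
    private final InetSocketAddress address;
    private final AtomicInteger attempts = new AtomicInteger();

    public RetryingConnectListener(ClientBootstrap bootstrap, InetSocketAddress address) {
        this.bootstrap = bootstrap;
        this.address = address;
    }

    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            return; // connected, nothing more to do
        }
        if (attempts.incrementAndGet() <= MAX_RETRIES) {
            // Re-issue the connect and keep this listener attached for the next result.
            // This runs on a Netty worker thread, which is why the retry count is bounded.
            bootstrap.connect(address).addListener(this);
        } else {
            // Out of retries: signal the rest of the application, e.g. via a volatile
            // flag or an observer, as described above.
        }
    }
}
It would be attached once with bootstrap.connect(address).addListener(new RetryingConnectListener(bootstrap, address)); a real implementation would usually also delay between attempts (for example with a Timer) rather than reconnecting immediately.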
Thank you, Brian Roach. The connected variable is volatile and can be accessed outside this code for further processing.
final InetSocketAddress sockAddr = new InetSocketAddress(
        config.getRemoteAddr(), config.getRemotePort());
final ChannelFuture connectFuture = bootstrap.connect(sockAddr);
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // Connection attempt succeeded:
            // Begin to accept incoming traffic.
            channel.setReadable(true);
            connected = true;
        } else {
            // Close the connection if the connection attempt has failed.
            channel.close();
            if (!connected) {
                logger.debug("Attempt to connect within " + ((double) frequency / (double) 1000) + " seconds");
                try {
                    Thread.sleep(frequency);
                } catch (InterruptedException e) {
                    logger.error(e.getMessage());
                }
                bootstrap.connect(sockAddr).addListener(this);
            }
        }
    }
});
We just finished building a server to store data to disk and fronted it with Netty. During load testing we were seeing Netty scale to about 8,000 messages per second. Given our systems, this looked really low. For a benchmark, we wrote a Tomcat front-end and ran the same load tests. With these tests we were getting roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is why there is such a huge difference in performance. Is there something obvious with respect to Netty that can get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        return Channels.pipeline(decoder, handler);
    }
});
server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {

    @Override
    protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null;
        }

        buffer.markReaderIndex();
        int length = buffer.readInt();
        if (buffer.readableBytes() < length) {
            buffer.resetReaderIndex();
            return null;
        }

        return buffer;
    }
}

public class ContentStoreChannelHandler extends SimpleChannelHandler {

    private final RequestHandler handler;

    @Inject
    public ContentStoreChannelHandler(RequestHandler handler) {
        this.handler = handler;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer in = (ChannelBuffer) e.getMessage();
        in.readerIndex(4);

        ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
        out.writerIndex(8); // Skip the length and status code

        boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
        if (success) {
            out.setInt(0, out.writerIndex() - 8); // length
            out.setInt(4, 0); // Status
        }

        Channels.write(e.getChannel(), out, e.getRemoteAddress());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        Throwable throwable = e.getCause();

        ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
        out.writeInt(0); // Length
        out.writeInt(Errors.generalException.getCode()); // status

        Channels.write(ctx, e.getFuture(), out);
    }

    @Override
    public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
        NettyContentStoreServer.allChannels.add(e.getChannel());
    }
}
UPDATE:
I've managed to get my Netty solution to within 4,000/second. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a Socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000/second with Netty and 25,000/second with Tomcat.
Although this is great news on the Netty side, I'm still getting 4,000/second less with Netty than with Tomcat. I can post my client side (which I thought I had ruled out, but apparently not) if anyone is interested in seeing it.
The method messageReceived is executed using a worker thread that is possibly getting blocked by RequestHandler#handle which may be busy doing some I/O work.
You could try adding an ExecutionHandler backed by an OrderedMemoryAwareThreadPoolExecutor (recommended) to the channel pipeline to execute the handlers, or alternatively try dispatching your handler work to a new ThreadPoolExecutor and passing a reference to the socket channel for later writing the response back to the client. Ex.:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
    executor.submit(new Runnable() {
        @Override
        public void run() {
            processHandlerAndRespond(e);
        }
    });
}

private void processHandlerAndRespond(MessageEvent e) {
    ChannelBuffer in = (ChannelBuffer) e.getMessage();
    in.readerIndex(4);
    ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
    out.writerIndex(8); // Skip the length and status code
    boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
    if (success) {
        out.setInt(0, out.writerIndex() - 8); // length
        out.setInt(4, 0); // Status
    }
    Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
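For the first (recommended) option, here is a minimal sketch of the server bootstrap from the question with an ExecutionHandler inserted in front of the business handler. ExecutionHandler and OrderedMemoryAwareThreadPoolExecutor live in org.jboss.netty.handler.execution; the thread count and memory limits below are illustrative values, not tuned recommendations:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);

// One shared ExecutionHandler for the whole server: 16 pool threads,
// 1 MiB per-channel and 16 MiB total pending-event memory limits.
final ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 16 * 1048576));

server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        // Handlers added after the ExecutionHandler run on the pool's threads,
        // so a slow RequestHandler no longer blocks the I/O worker threads.
        return Channels.pipeline(decoder, executionHandler, handler);
    }
});

server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));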