Netty slower than Tomcat - java

We just finished building a server that stores data to disk, fronted with Netty. During load testing we saw Netty scale to about 8,000 messages per second. Given our systems, this looked really low. As a benchmark, we wrote a Tomcat front-end and ran the same load tests. With those tests we got roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is: why is there such a huge difference in performance? Is there something obvious with respect to Netty that can get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
return Channels.pipeline(decoder, handler);
}
});
server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {
@Override
protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
if (buffer.readableBytes() < 4) {
return null;
}
buffer.markReaderIndex();
int length = buffer.readInt();
if (buffer.readableBytes() < length) {
buffer.resetReaderIndex();
return null;
}
return buffer;
}
}
public class ContentStoreChannelHandler extends SimpleChannelHandler {
private final RequestHandler handler;
@Inject
public ContentStoreChannelHandler(RequestHandler handler) {
this.handler = handler;
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
ChannelBuffer in = (ChannelBuffer) e.getMessage();
in.readerIndex(4);
ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
out.writerIndex(8); // Skip the length and status code
boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
if (success) {
out.setInt(0, out.writerIndex() - 8); // length
out.setInt(4, 0); // Status
}
Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
Throwable throwable = e.getCause();
ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
out.writeInt(0); // Length
out.writeInt(Errors.generalException.getCode()); // status
Channels.write(ctx, e.getFuture(), out);
}
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
NettyContentStoreServer.allChannels.add(e.getChannel());
}
}
UPDATE:
I've managed to get my Netty solution to within 4,000 messages/second of Tomcat. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000 messages/second with Netty and 25,000 messages/second with Tomcat.
Although this is great news on the Netty side, I'm still getting 4,000 messages/second less with Netty than with Tomcat. I can post my client side (which I thought I had ruled out, but apparently not) if anyone is interested in seeing it.

The messageReceived method is executed on a worker thread that is possibly getting blocked by RequestHandler#handle, which may be busy doing I/O work.
You could try adding an OrderedMemoryAwareThreadPoolExecutor to the channel pipeline (recommended) for executing the handlers, or alternatively dispatch your handler work to a ThreadPoolExecutor, passing a reference to the channel so the response can later be written back to the client. For example:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
executor.submit(new Runnable() {
@Override
public void run() {
processHandlerAndRespond(e);
}
});
}
private void processHandlerAndRespond(MessageEvent e) {
ChannelBuffer in = (ChannelBuffer) e.getMessage();
in.readerIndex(4);
ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
out.writerIndex(8); // Skip the length and status code
boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
if (success) {
out.setInt(0, out.writerIndex() - 8); // length
out.setInt(4, 0); // Status
}
Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
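If you go the recommended route, a minimal sketch (Netty 3; the thread count and memory limits below are illustrative, not tuned values) is to place an ExecutionHandler backed by an OrderedMemoryAwareThreadPoolExecutor between the decoder and the business handler, so that RequestHandler#handle runs off the I/O worker threads:
// Uses org.jboss.netty.handler.execution.ExecutionHandler and
// org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor.
final ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 16 * 1048576));

server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        // Handlers placed after executionHandler are invoked on the executor's threads,
        // not on the NIO worker threads, so blocking in handle() no longer stalls I/O.
        return Channels.pipeline(decoder, executionHandler, handler);
    }
});
The ExecutionHandler instance is sharable, so a single instance can be reused across all pipelines.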

Related

How does netty's non blocking threading model work

Currently, I'm reading the book "Reactive Programming with RxJava" by Tomasz Nurkiewicz. In chapter 5 he compares two different approaches to building an HTTP server, one of which is based on the Netty framework.
I can't figure out how using such a framework helps to build a more responsive server compared to the classic approach of thread-per-request blocking IO.
The main concept is to utilize as few threads as possible, but if there is some blocking IO operation such as DB access, that means only a very limited number of concurrent connections can be handled at a time.
I've reproduced an example from that book.
Initializing the server:
public static void main(String[] args) throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
new ServerBootstrap()
.option(ChannelOption.SO_BACKLOG, 50_000)
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new HttpInitializer())
.bind(8080)
.sync()
.channel()
.closeFuture()
.sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
The size of the worker group thread pool is availableProcessors * 2 = 8 on my machine.
To simulate some IO operation (it could just as well be some business logic invocation) and to be able to see what is going on in the log, I've added a 1-second latency to the handler:
class HttpInitializer extends ChannelInitializer<SocketChannel> {
private final HttpHandler httpHandler = new HttpHandler();
@Override
public void initChannel(SocketChannel ch) {
ch
.pipeline()
.addLast(new HttpServerCodec())
.addLast(httpHandler);
}
}
And the handler itself:
class HttpHandler extends ChannelInboundHandlerAdapter {
private static final Logger log = LoggerFactory.getLogger(HttpHandler.class);
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
try {
System.out.println(format("Request received on thread '%s' from '%s'", Thread.currentThread().getName(), ((NioSocketChannel)ctx.channel()).remoteAddress()));
} catch (Exception ex) {}
sendResponse(ctx);
}
}
private void sendResponse(ChannelHandlerContext ctx) {
final DefaultFullHttpResponse response = new DefaultFullHttpResponse(
HTTP_1_1,
HttpResponseStatus.OK,
Unpooled.wrappedBuffer("OK".getBytes(UTF_8)));
try {
TimeUnit.SECONDS.sleep(1);
} catch (Exception ex) {
System.out.println("Ex catched " + ex);
}
response.headers().add("Content-length", 2);
ctx.writeAndFlush(response);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
log.error("Error", cause);
ctx.close();
}
}
The client to simulate multiple concurrent connections:
public class NettyClient {
public static void main(String[] args) throws Exception {
NettyClient nettyClient = new NettyClient();
for (int i = 0; i < 100; i++) {
new Thread(() -> {
try {
nettyClient.startClient();
} catch (Exception ex) {
}
}).start();
}
TimeUnit.SECONDS.sleep(5);
}
public void startClient()
throws IOException, InterruptedException {
InetSocketAddress hostAddress = new InetSocketAddress("localhost", 8080);
SocketChannel client = SocketChannel.open(hostAddress);
System.out.println("Client... started");
String threadName = Thread.currentThread().getName();
// Send messages to server
String[] messages = new String[]
{"GET / HTTP/1.1\n" +
"Host: localhost:8080\n" +
"Connection: keep-alive\n" +
"Cache-Control: max-age=0\n" +
"Upgrade-Insecure-Requests: 1\n" +
"User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36\n" +
"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\n" +
"Accept-Encoding: gzip, deflate, br\n" +
"Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7"};
for (int i = 0; i < messages.length; i++) {
byte[] message = new String(messages[i]).getBytes();
ByteBuffer buffer = ByteBuffer.wrap(message);
client.write(buffer);
System.out.println(messages[i]);
buffer.clear();
}
client.close();
}
}
Expected -
Our case is the blue line in the book's graph, the only difference being that its delay was 0.1 sec instead of the 1 sec I used, as explained above. With 100 concurrent connections I was expecting about 100 RPS, because the graph shows roughly 90k RPS with 100k concurrent connections and a 0.1 sec delay.
Actual - Netty handles only 8 concurrent connections at a time, waits while the sleep expires, takes another batch of 8 requests, and so on. As a result, it took about 13 seconds to complete all requests. Obviously, to handle more clients I would need to allocate more threads.
But this is exactly how the classic blocking IO approach works! Here are the server-side logs; as you can see, the first 8 requests are handled, and one second later another 8:
2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49466'
2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49465'
2019-07-19T12:34:10.792Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49464'
2019-07-19T12:34:10.793Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49463'
2019-07-19T12:34:10.799Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49462'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49467'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49461'
2019-07-19T12:34:10.803Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49460'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49552'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49553'
2019-07-19T12:34:11.799Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49554'
2019-07-19T12:34:11.801Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49470'
2019-07-19T12:34:11.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49475'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49559'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49468'
2019-07-19T12:34:11.806Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49469'
So my question is: how could Netty (or something similar), with its non-blocking and event-driven architecture, utilize the CPU more effectively? If we had only one thread per event loop group, the pipeline would be as follows:
The ServerChannel selection key is set to ON_ACCEPT.
The ServerChannel accepts a connection and the ClientChannel selection key is set to ON_READ.
A worker thread reads the content of this ClientChannel and passes it to the chain of handlers.
Even if the ServerChannel thread accepts another client connection and puts it into some sort of queue, the worker thread can't do anything before all handlers in the chain finish their job. From my point of view, the thread can't just switch to another job, since even waiting for a response from a remote DB requires CPU ticks.
"how could netty (or something similar) with its non-blocking and event-driven architecture utilize the CPU more effectively? "
It cannot.
The goal of asynchronous (non-blocking and event-driven) programming is to save core memory, by using tasks instead of threads as the units of parallel work. This makes it possible to have millions of parallel activities instead of thousands.
CPU cycles cannot be saved automatically; that always takes deliberate engineering work.
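To illustrate the "tasks instead of threads" idea with the example above: if the one-second delay were expressed as a scheduled task instead of Thread.sleep, the eight event-loop threads would never block, and all 100 connections could be answered after roughly one second instead of ~13. A minimal sketch (Netty 4, reusing the names from the handler above; only the delay handling changes):
private void sendResponse(ChannelHandlerContext ctx) {
    final DefaultFullHttpResponse response = new DefaultFullHttpResponse(
            HTTP_1_1,
            HttpResponseStatus.OK,
            Unpooled.wrappedBuffer("OK".getBytes(UTF_8)));
    response.headers().add("Content-length", 2);
    // Schedule the write on the channel's event loop instead of sleeping on it;
    // during the delay the thread is free to read and serve other channels.
    ctx.executor().schedule(() -> {
        ctx.writeAndFlush(response);
    }, 1, TimeUnit.SECONDS);
}
The CPU work is the same either way; what changes is that the waiting is represented as a pending task rather than a parked thread.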

Java HttpsServer multi threaded

I have set up an HttpsServer in Java. All of my communication works perfectly. I set up multiple contexts, load a self-signed certificate, and even start up based on an external configuration file.
My problem now is getting multiple clients to be able to hit my secure server. To do so, I would like to somehow multi-thread the requests that come in to the HttpsServer, but I cannot figure out how to do so. Below is my basic HttpsConfiguration.
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 0);
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(secureConnection.getKeyManager().getKeyManagers(), secureConnection.getTrustManager().getTrustManagers(), null);
server.setHttpsConfigurator(new SecureServerConfiguration(sslContext));
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
Where secureConnection is a custom class containing server setup and certificate information.
I attempted to set the executor to Executors.newCachedThreadPool() and a couple of other ones. However, they all produced the same result. Each managed the threads differently, but the first request had to finish before the second could be processed.
I also tried writing my own Executor
public class AsyncExecutor extends ThreadPoolExecutor implements Executor
{
public static Executor create()
{
return new AsyncExecutor();
}
public AsyncExecutor()
{
super(5, 10, 10000, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(12));
}
@Override
public void execute(Runnable process)
{
System.out.println("New Process");
Thread newProcess = new Thread(process);
newProcess.setDaemon(false);
newProcess.start();
System.out.println("Thread created");
}
}
Unfortunately, it produced the same result as the other Executors.
To test, I am using Postman to hit the /test endpoint, which simulates a long-running task with a Thread.sleep(10000). While that is running, I use my Chrome browser to hit the root endpoint. The root page does not load until the 10-second sleep is over.
Any thoughts on how to handle multiple concurrent requests to the HTTPS server?
For ease of testing, I replicated my scenario using the standard HttpServer and condensed everything into a single java program.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
public class Example
{
private final static int PORT = 80;
private final static int BACKLOG = 10;
/**
* To test, hit:
* <p><b>http://localhost/test</b></p>
* <p>This will hit the endpoint with the thread sleep.<br>
* Then hit:</p>
* <p><b>http://localhost</b></p>
* <p>I would expect this to come back right away. However, it does not come back until the
* first request finishes. This can be tested with only a basic browser.</p>
* @param args
* @throws Exception
*/
public static void main(String[] args) throws Exception
{
new Example().start();
}
private void start() throws Exception
{
HttpServer server = HttpServer.create(new InetSocketAddress(PORT), BACKLOG);
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
System.out.println("Server Started on " + PORT);
}
class RootHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
String body = "<html>Hello World</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
class TestHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
try
{
Thread.sleep(10000);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
String body = "<html>Test Handled</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
}
TL;DR: It's OK; just use two different browsers, or a specialized tool, to test it.
Your original implementation is OK and works as expected; no custom Executor is needed. For each request it executes a method of the "shared" handler class instance. It always picks a free thread from the pool, so each method call is executed in a different thread.
The problem seems to be that when you use multiple windows of the same browser to test this behavior, for some reason the requests get executed in a serialized way (only one at a time). Tested with the latest Firefox, Chrome, Edge and Postman: Edge and Postman work as expected. The incognito mode of Firefox and Chrome also helps.
I opened the same local URL at the same time from two Chrome windows. In the first, the page loaded after 5 s (I used Thread.sleep(5000), so that's OK). The second window loaded the response in 8.71 s, so there is a 3.71 s delay of unknown origin.
My guess? Probably some browser internal optimization or failsafe mechanism.
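One way to take the browser out of the picture entirely is to fire the requests from two plain Java threads. This is just a quick sketch (it assumes the Example server above is running on port 80), not part of the original test:
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConcurrencyCheck {
    public static void main(String[] args) {
        request("http://localhost/test"); // slow handler (Thread.sleep)
        request("http://localhost/");     // should come back immediately if requests run in parallel
    }

    private static void request(final String url) {
        new Thread(new Runnable() {
            public void run() {
                try {
                    long start = System.currentTimeMillis();
                    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                    int code = conn.getResponseCode();
                    InputStream in = conn.getInputStream();
                    while (in.read() != -1) { /* drain the body */ }
                    in.close();
                    System.out.println(url + " -> " + code + " in "
                            + (System.currentTimeMillis() - start) + " ms");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
}
If the root URL returns while /test is still sleeping, the server is handling requests concurrently and any serialization you see is coming from the client.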
Try specifying a non-zero maximum backlog (the second argument to create()):
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 10);
I did some experiments and what works for me is:
@Override
public void handle(HttpExchange exchange) {
executor.submit(new SomeOtherHandler());
}
public class SomeOtherHandler implements Runnable {
}
where handle is the HttpHandler method and executor is the thread pool you created.

Netty UDP Performance Issue

I've implemented three small UDP servers: one with a plain Java DatagramSocket (threaded), one with Netty, and the last one also with Netty but with threaded message handling (because Netty doesn't support multiple threads with UDP).
After some measurements I got the following results for requests per second:
DatagramSocket ~30.000 requests/second
Netty ~1.500 requests/second
Netty (threaded): ~8.000 requests/second
The real application I have to implement must handle > 25.000 requests/second. So my question is whether I'm doing something wrong with Netty, or whether Netty is simply not designed to handle that many requests per second.
Here are the implementations
DatagramSocket Main
public static void main(String... args) throws Exception {
final int port = Integer.parseInt(args[0]);
final int threads = Integer.parseInt(args[1]);
final int work = Integer.parseInt(args[2]);
DATAGRAM_SOCKET = new DatagramSocket(port);
for (int i = 0; i < threads; i++) {
new Thread(new Handler(work)).start();
}
}
DatagramSocket Handler
private static final class Handler implements Runnable {
private final int work;
public Handler(int work) throws SocketException {
this.work = work;
}
@Override
public void run() {
try {
while (!DATAGRAM_SOCKET.isClosed()) {
final DatagramPacket receivePacket = new DatagramPacket(new byte[1024], 1024);
DATAGRAM_SOCKET.receive(receivePacket);
final InetAddress ip = receivePacket.getAddress();
final int port = receivePacket.getPort();
final byte[] sendData = "Hey there".getBytes();
Thread.sleep(RANDOM.nextInt(work));
final DatagramPacket sendPacket = new DatagramPacket(sendData, sendData.length, ip, port);
DATAGRAM_SOCKET.send(sendPacket);
}
} catch (Exception e) {
System.out.println("ERROR: " + e.getMessage());
}
}
}
Netty Main
public static void main(String[] args) throws Exception
{
final int port = Integer.parseInt(args[0]);
final int sleep = Integer.parseInt(args[1]);
final Bootstrap bootstrap = new Bootstrap();
bootstrap.group(new NioEventLoopGroup());
bootstrap.channel(NioDatagramChannel.class);
bootstrap.handler(new MyNettyUdpHandler(sleep));
bootstrap.bind(port).sync().channel().closeFuture().sync();
}
Netty Handler (threaded)
public class MyNettyUdpHandler extends MessageToMessageDecoder<DatagramPacket> {
private final Random random = new Random(System.currentTimeMillis());
private final int sleep;
public MyNettyUdpHandler(int sleep) {
this.sleep = sleep;
}
@Override
protected void decode(ChannelHandlerContext channelHandlerContext, DatagramPacket datagramPacket, List list) throws Exception {
new Thread(() -> {
try {
Thread.sleep(random.nextInt(sleep));
} catch (InterruptedException e) {
System.out.println("ERROR while sleeping");
}
final ByteBuf buffer = Unpooled.buffer(64);
buffer.writeBytes("Hey there".getBytes());
channelHandlerContext.channel().writeAndFlush(new DatagramPacket(buffer, datagramPacket.sender()));
}).start();
}
}
The non-threaded Netty handler is the same, but without the thread.
You can change your Netty decode() method like so to make it equivalent to the DatagramSocket code:
@Override
protected void decode(ChannelHandlerContext channelHandlerContext, DatagramPacket datagramPacket, List list) throws Exception {
final Channel channel = channelHandlerContext.channel();
channel.eventLoop().schedule(() -> {
final ByteBuf buffer = Unpooled.buffer(64);
buffer.writeBytes("Hey there".getBytes());
channel.writeAndFlush(new DatagramPacket(buffer, datagramPacket.sender()));
}, random.nextInt(sleep), TimeUnit.MILLISECONDS);
}
But I'm guessing the sleep() code is simulating business code you will later execute.
If that is the case make sure you don't run blocking code inside the handler.
EDIT:
To answer your question below:
You got a bit confused with the channels. You create a pipeline in the bootstrap and you bind to some port; the returned channel is the server channel. The channel in the handler method (your decode method in this case) is like the socket you get from accept() in traditional socket programming, much like the port you extracted from the incoming DatagramPacket. So you send data back to the client on this channel.
The code I wrote that schedules the response simply does the same thing as your DatagramSocket code and the threaded Netty code you wrote.
I wasn't sure why you did that, and simply assumed you have a business requirement to delay the response.
If this isn't the case, you can remove the schedule call, and your code will run much faster.
If your business logic is non-blocking, and runs in a few millis, you're done. If it's blocking, you need to try to find a non-blocking alternative, or run it in an executor, i.e. not on the event loop.
Hope this helps, even though this wasn't part of your original question. Netty is awesome, and I hate seeing bad examples and bad vibes about it so it's worth my time I guess ;)
Creating a thread in every decode() is inefficient.
You can submit the task to channel.eventLoop() as Eran said, if the task is simple and won't block. (In fact, decode() in a MessageToMessageDecoder is executed by the channel's EventLoop, so you need not submit it manually unless you want to schedule it.)
Or you can submit the task to a ThreadPoolExecutor or EventExecutorGroup.
The latter is better because you can add listeners to the Future returned by EventExecutorGroup.submit() so you don't have to wait for the task to be completed.
My English is poor, but I hope this helps.
Edit:
You can write it as follows, executing the simple logic code directly in the EventLoop (i.e. the I/O thread):
@Override
protected void decode(ChannelHandlerContext channelHandlerContext, DatagramPacket datagramPacket, List list) throws Exception {
//do something simple with datagramPacket
...
final ByteBuf buffer = Unpooled.buffer(64);
buffer.writeBytes("Hey there".getBytes());
channelHandlerContext.channel().writeAndFlush(new DatagramPacket(buffer, datagramPacket.sender()));
}
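If the per-packet work really is blocking, a rough sketch of the EventExecutorGroup variant mentioned above could look like the following (Netty 4; the class name, the group size of 16, and the listener body are illustrative assumptions, not from the answers above):
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.socket.DatagramPacket;
import io.netty.handler.codec.MessageToMessageDecoder;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;
import io.netty.util.concurrent.Future;
import java.net.InetSocketAddress;
import java.util.List;

public class OffloadingUdpHandler extends MessageToMessageDecoder<DatagramPacket> {
    // Separate pool for blocking business work, kept off the I/O event loop.
    private final EventExecutorGroup businessGroup = new DefaultEventExecutorGroup(16);

    @Override
    protected void decode(ChannelHandlerContext ctx, DatagramPacket packet, List<Object> out) {
        // Capture what we need before decode() returns and the packet is released.
        final InetSocketAddress sender = packet.sender();
        final Channel channel = ctx.channel();
        Future<?> future = businessGroup.submit(() -> {
            // Blocking business logic would run here, off the I/O thread.
            ByteBuf buffer = Unpooled.buffer(64);
            buffer.writeBytes("Hey there".getBytes());
            channel.writeAndFlush(new DatagramPacket(buffer, sender));
        });
        // Get notified instead of waiting for completion.
        future.addListener(f -> {
            if (!f.isSuccess()) {
                System.out.println("Handler work failed: " + f.cause());
            }
        });
    }
}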

Netty 4.0.23 multiple hosts single client

My question is about creating multiple TCP clients to multiple hosts using the same event loop group in Netty 4.0.23 Final. I must admit that I don't quite understand Netty 4's client threading model, especially given the many confusing references to Netty 3.x implementations I hit during my research on the internet.
With the following code, I establish a connection to a single server and send random commands using a command queue:
public class TCPsocket {
private static final CircularFifoQueue CommandQueue = new CircularFifoQueue(20);
private final EventLoopGroup workerGroup;
private final TcpClientInitializer tcpHandlerInit; // all handlers sharable
public TCPsocket() {
workerGroup = new NioEventLoopGroup();
tcpHandlerInit = new TcpClientInitializer();
}
public void connect(String host, int port) throws InterruptedException {
try {
Bootstrap b = new Bootstrap();
b.group(workerGroup);
b.channel(NioSocketChannel.class);
b.remoteAddress(host, port);
b.handler(tcpHandlerInit);
Channel ch = b.connect().sync().channel();
ChannelFuture writeCommand = null;
for (;;) {
if (!CommandQueue.isEmpty()) {
writeCommand = ch.writeAndFlush(CommandExecute()); // CommandExecute() fetches a command from the CommandQueue and encodes it into a byte array
}
if (CommandQueue.isFull()) { // this will never happen ... or should never happen
ch.closeFuture().sync();
break;
}
}
if (writeCommand != null) {
writeCommand.sync();
}
} finally {
workerGroup.shutdownGracefully();
}
}
public static void main(String args[]) throws InterruptedException {
TCPsocket socket = new TCPsocket();
socket.connect("192.168.0.1", 2101);
}
}
In addition to executing commands off the command queue, this client keeps receiving periodic responses from the server, in response to an initial command that is sent as soon as the channel becomes active. In one of the registered handlers (in the TcpClientInitializer implementation), I have:
@Override
public void channelActive(ChannelHandlerContext ctx) {
ctx.writeAndFlush(firstMessage);
System.out.println("sent first message\n");
}
which activates a feature in the connected-to server, triggering a periodic packet that is returned from the server throughout the life span of my application.
The problem comes when I try to use this same setup to connect to multiple servers,
by looping through a string array of known server IPs:
public static void main(String args[]) throws InterruptedException {
String[] hosts = new String[]{"192.168.0.2", "192.168.0.4", "192.168.0.5"};
TCPsocket socket = new TCPsocket();
for (String host : hosts) {
socket.connect(host, 2101);
}
}
Once the first connection is established and the server (192.168.0.2) starts sending the designated periodic packets, no other connection is attempted, which (I think) is the result of the main thread waiting on the connection to die and hence never running the second iteration of the for loop. The discussion in this question led me to think that the connection process is started in a separate thread, allowing the main thread to continue executing, but that's not what I see here. So what is actually happening? And how would I go about implementing connections to multiple hosts from the same client in Netty 4.0.23 Final?
Thanks in advance
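A minimal sketch of one way the connect loop could be restructured so that it does not block per host (Netty 4, reusing the TcpClientInitializer from the code above; the hosts and port are the ones from the question):
public static void main(String[] args) throws InterruptedException {
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        Bootstrap b = new Bootstrap();
        b.group(workerGroup);
        b.channel(NioSocketChannel.class);
        b.handler(new TcpClientInitializer()); // handlers are sharable, as in the question
        String[] hosts = new String[]{"192.168.0.2", "192.168.0.4", "192.168.0.5"};
        List<Channel> channels = new ArrayList<Channel>();
        for (String host : hosts) {
            // sync() here only waits for the connect to complete, not for the channel to close
            channels.add(b.connect(host, 2101).sync().channel());
        }
        // ... issue commands on the channels here ...
        for (Channel ch : channels) {
            ch.closeFuture().sync(); // now wait for all connections to finish
        }
    } finally {
        workerGroup.shutdownGracefully();
    }
}
The sketch avoids the pattern in the original connect() method, where the for (;;) loop and ch.closeFuture().sync() never return, so the main thread never reaches the next host, and the finally block would shut down the shared workerGroup after the first connection anyway.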

Netty Connection Retries

Retry Connection in Netty
I am building a client socket system. The requirements are:
First, attempt to connect to the remote server.
When the first attempt fails, keep on trying until the server is online.
I would like to know whether there is such a feature in Netty, or how best I can solve this.
Thank you very much
This is the code snippet I am struggling with:
protected void connect() throws Exception {
this.bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool()));
// Configure the event pipeline factory.
bootstrap.setPipelineFactory(new SmpPipelineFactory());
bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
bootstrap.setOption("sendBufferSize", 1048576);
bootstrap.setOption("receiveBufferSize", 1048576);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
// Make a new connection.
final ChannelFuture connectFuture = bootstrap
.connect(new InetSocketAddress(config.getRemoteAddr(), config
.getRemotePort()));
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future)
throws Exception {
if (connectFuture.isSuccess()) {
// Connection attempt succeeded:
// Begin to accept incoming traffic.
channel.setReadable(true);
} else {
// Close the connection if the connection attempt has
// failed.
channel.close();
logger.info("Unable to Connect to the Remote Socket server");
}
}
});
}
Assuming Netty 3.x, the simplest example would be:
// Configure the client.
ClientBootstrap bootstrap = new ClientBootstrap(
new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool()));
ChannelFuture future = null;
while (true)
{
future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
future.awaitUninterruptibly();
if (future.isSuccess())
{
break;
}
}
Obviously you'd want to have your own logic for the loop that sets a max number of tries, etc. Netty 4.x has a slightly different bootstrap, but the logic is the same. This is also synchronous, blocking, and ignores InterruptedException; in a real application you might register a ChannelFutureListener with the Future and be notified when the Future completes.
Added after the OP edited the question:
You have a ChannelFutureListener that is getting notified. If you want to then retry the connection you're going to have to either have that listener hold a reference to the bootstrap, or communicate back to your main thread that the connection attempt failed and have it retry the operation. If you have the listener do it (which is the simplest way) be aware that you need to limit the number of retries to prevent an infinite recursion - it's being executed in the context of the Netty worker thread. If you exhaust your retries, again, you'll need to communicate that back to your main thread; you could do that via a volatile variable, or the observer pattern could be used.
When dealing with async you really have to think concurrently. There's a number of ways to skin that particular cat.
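A rough Netty 3 sketch of the listener-based approach with a bounded retry count (the method and constant names here are illustrative, not from the answer above):
private static final int MAX_RETRIES = 5;

private void connectWithRetry(final ClientBootstrap bootstrap,
                              final InetSocketAddress addr,
                              final int attempt) {
    bootstrap.connect(addr).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                // Connected: hand the channel to the rest of the application.
            } else if (attempt < MAX_RETRIES) {
                // Runs on a Netty worker thread, so keep it light; this bounded
                // recursion replaces the blocking while loop shown above.
                connectWithRetry(bootstrap, addr, attempt + 1);
            } else {
                // Out of retries: signal the main thread (volatile flag, observer, etc.).
            }
        }
    });
}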
Thank you, Brian Roach. The connected variable is volatile and can be accessed outside this code for further processing.
final InetSocketAddress sockAddr = new InetSocketAddress(
config.getRemoteAddr(), config.getRemotePort());
final ChannelFuture connectFuture = bootstrap
.connect(sockAddr);
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future)
throws Exception {
if (future.isSuccess()) {
// Connection attempt succeeded:
// Begin to accept incoming traffic.
channel.setReadable(true);
connected = true;
} else {
// Close the connection if the connection attempt has
// failed.
channel.close();
if(!connected){
logger.debug("Attempt to connect within " + ((double)frequency/(double)1000) + " seconds");
try {
Thread.sleep(frequency);
} catch (InterruptedException e) {
logger.error(e.getMessage());
}
bootstrap.connect(sockAddr).addListener(this);
}
}
}
});
