Netty Add Channel to ServerBootstrap - java

I have a ServerBootstrap accepting data from clients. Most connections come from arbitrary endpoints, but I also want to handle data coming from one specific endpoint.
Essentially I'm reading and writing strings over n+1 connections. If the one specific connection ever closes, I need to reopen it.
Currently I'm trying to run a Bootstrap connected to the specific endpoint and a ServerBootstrap handling all of the incoming connections, but the sync() that starts one of the bootstraps blocks the rest of the application, so I can't start the other one.
Or is it possible to just create a channel from scratch, connect it, and add it to the EventLoopGroup?
Here's an example of what I have so far. Currently startServer() blocks at channelFuture.channel().closeFuture().sync():
private Channel mChannel;
private EventLoopGroup mListeningGroup;
private EventLoopGroup mSpeakingGroup;

public void startServer() {
    try {
        ServerBootstrap bootstrap = new ServerBootstrap()
                .group(mListeningGroup, mSpeakingGroup)
                .channel(NioServerSocketChannel.class)
                .option(ChannelOption.SO_BACKLOG, 1024)
                .childOption(ChannelOption.SO_KEEPALIVE, true)
                .childHandler(new ServerInitializer());
        ChannelFuture channelFuture = bootstrap.bind(mListeningPort).sync();
        channelFuture.channel().closeFuture().sync();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        mListeningGroup.shutdownGracefully();
        mSpeakingGroup.shutdownGracefully();
    }
}

public void startClient() throws InterruptedException {
    Bootstrap bootstrap = new Bootstrap()
            .group(mSpeakingGroup)
            .channel(NioSocketChannel.class)
            .option(ChannelOption.SO_BACKLOG, 1024)
            .option(ChannelOption.TCP_NODELAY, true)
            .option(ChannelOption.SO_KEEPALIVE, true)
            .handler(new ClientInitializer());
    ChannelFuture future = bootstrap.connect(mAddress, mPort).sync();
    mChannel = future.channel();
    mChannel.closeFuture().addListener((ChannelFutureListener) futureListener -> mChannel = null).sync();
}
Once data is read by any of the n+1 sockets, it puts its message into a PriorityQueue, and a while loop continuously pops off the queue and writes the data to every Channel. Does anyone have any ideas about the best way to approach this?
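One possible arrangement, sketched below under the assumption that sync() only needs to wait for bind()/connect() to complete rather than for the channels to close: let the server start return once it is bound, and drive the dedicated outbound connection from a listener so it is reopened whenever it closes. The start()/connectClient() helpers and the reconnect logic are illustrative, not taken from the question; the fields and initializers are the ones shown above.

public void start() throws InterruptedException {
    ServerBootstrap serverBootstrap = new ServerBootstrap()
            .group(mListeningGroup, mSpeakingGroup)
            .channel(NioServerSocketChannel.class)
            .childHandler(new ServerInitializer());
    // sync() here only waits for the bind; nothing blocks on closeFuture().
    serverBootstrap.bind(mListeningPort).sync();
    connectClient();
}

// Hypothetical helper: connect the dedicated endpoint and reconnect when it drops.
private void connectClient() {
    Bootstrap bootstrap = new Bootstrap()
            .group(mSpeakingGroup)
            .channel(NioSocketChannel.class)
            .option(ChannelOption.TCP_NODELAY, true)
            .handler(new ClientInitializer());
    bootstrap.connect(mAddress, mPort).addListener((ChannelFutureListener) f -> {
        if (f.isSuccess()) {
            mChannel = f.channel();
            mChannel.closeFuture().addListener(cf -> {
                mChannel = null;
                connectClient(); // reopen the specific connection when it closes
            });
        }
    });
}

Because both bootstraps share mSpeakingGroup, no extra threads are needed, and the calling thread never blocks on either connection.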

Related

Netty - how to survive DDOS?

I'm using Netty 4.1 as an NIO socket server for an MMORPG game. It has been running perfectly for years, but recently we have been suffering from DDoS attacks. I've been fighting them for a long time, but at this point I don't have any more ideas on how to improve things. The attacker spams new connections from thousands of IPs from all over the world. It's difficult to cut them off at the network level because the attacks look very similar to normal players. The attacks are not very big compared to attacks on HTTP servers, but big enough to crash our game.
How I'm using Netty:
public void startServer() {
    bossGroup = new NioEventLoopGroup(1);
    workerGroup = new NioEventLoopGroup();
    try {
        int timeout = (Settings.SOCKET_TIMEOUT * 1000);
        bootstrap = new ServerBootstrap();
        int bufferSize = 65536;
        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childOption(ChannelOption.SO_KEEPALIVE, true)
                .childOption(ChannelOption.SO_TIMEOUT, timeout)
                .childOption(ChannelOption.SO_RCVBUF, bufferSize)
                .childOption(ChannelOption.SO_SNDBUF, bufferSize)
                .handler(new LoggingHandler(LogLevel.INFO))
                .childHandler(new CustomInitalizer(sslCtx));
        ChannelFuture bind = bootstrap.bind(DrServerAdmin.port);
        bossChannel = bind.sync();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}
Initializer:
public class CustomInitalizer extends ChannelInitializer<SocketChannel> {
    public static DefaultEventExecutorGroup normalGroup = new DefaultEventExecutorGroup(16);
    public static DefaultEventExecutorGroup loginGroup = new DefaultEventExecutorGroup(8);
    public static DefaultEventExecutorGroup commandsGroup = new DefaultEventExecutorGroup(4);

    private final SslContext sslCtx;

    public CustomInitalizer(SslContext sslCtx) {
        this.sslCtx = sslCtx;
    }

    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        if (sslCtx != null) {
            pipeline.addLast(sslCtx.newHandler(ch.alloc()));
        }
        pipeline.addLast(new CustomFirewall()); // it is an AbstractRemoteAddressFilter<InetSocketAddress>
        int limit = 32768;
        pipeline.addLast(new DelimiterBasedFrameDecoder(limit, Delimiters.nulDelimiter()));
        pipeline.addLast("decoder", new StringDecoder(CharsetUtil.UTF_8));
        pipeline.addLast("encoder", new StringEncoder(CharsetUtil.UTF_8));
        pipeline.addLast(new CustomReadTimeoutHandler(Settings.SOCKET_TIMEOUT));

        int id = DrServerNetty.getDrServer().getIdClient();
        CustomHandler normalHandler = new CustomHandler();
        FlashClientNetty client = new FlashClientNetty(normalHandler, id);
        normalHandler.setClient(client);
        pipeline.addLast(normalGroup, "normalHandler", normalHandler);

        CustomLoginHandler loginHandler = new CustomLoginHandler(client);
        pipeline.addLast(loginGroup, "loginHandler", loginHandler);

        CustomCommandsHandler commandsHandler = new CustomCommandsHandler(loginHandler.client);
        pipeline.addLast(commandsGroup, "commandsHandler", commandsHandler);
    }
}
I'm using 5 groups:
bootstrap bossGroup - for new connections
bootstrap workerGroup - for delivering messages
normalGroup - for most messages
loginGroup - for heavy login process
commands group - for some heavy logic
I'm monitoring the number of new connections and messages, so I can immediately tell when an attack is going on. During an attack I stop accepting new connections: I return false in the custom firewall (AbstractRemoteAddressFilter).
protected boolean accept(ChannelHandlerContext ctx, InetSocketAddress remoteAddress) throws Exception {
    if (ddosDetected())
        return false;
    else
        return true;
}
But even though I'm dropping new connections right away, my worker group gets overloaded. PendingTasks for the worker group (all the other groups are fine) keep growing, which causes longer and longer response times for normal players until they finally get kicked by socket timeouts. I'm not sure why this happens. During normal server usage the busiest groups are the login and normal groups. At the network level the server is fine - it's using just ~10% of its bandwidth limit. CPU and RAM usage also aren't very high during the attack. But after a few minutes of such an attack, all my players are kicked out of the game and are unable to connect anymore.
Is there a better way to instantly drop all incoming connections and protect the users that are already connected?
I think you will need to "fix this" at the kernel level, for example via iptables. Otherwise you can only close the connection after you have already accepted it, which doesn't sound good enough in this case.
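If the goal is merely to keep the attack away from the worker group, one option worth sketching is to reject accepted children while they are still in the boss group, via handler() on the ServerBootstrap itself (as opposed to childHandler()). The sketch below is an assumption about how that could look: EarlyDropHandler and ddosDetected() are illustrative names, and closeForcibly() is used here the way Netty's own ServerBootstrapAcceptor uses it when a child cannot be registered.

// Installed with bootstrap.handler(new EarlyDropHandler()) next to the LoggingHandler.
public class EarlyDropHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // On the server channel, msg is the freshly accepted child SocketChannel.
        if (ddosDetected()) {
            ((Channel) msg).unsafe().closeForcibly(); // drop it before it ever reaches the worker group
            return;
        }
        ctx.fireChannelRead(msg); // normal path: pass it on to the ServerBootstrapAcceptor
    }

    private boolean ddosDetected() {
        return false; // placeholder for the same detection flag used in the custom firewall
    }
}

This keeps the rejected sockets off the worker group's task queues entirely, but as the answer above says, true protection against this kind of flood usually still belongs at the kernel or network level.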

Socket taking too long in FIN_WAIT_1

I'm getting this strange behavior and don't know if SO_LINGER(0) is the "best way" to solve it.
It only happens when these two factors come together:
OS - Windows
lower internet bandwidth
When these two factors combine, my TCP connections (client-side) get stuck in FIN_WAIT and then TIME_WAIT for a long, long time right after calling close().
(On this network, the exact same app running on another OS behaves as expected. The same applies to Windows on a better network connection.)
In this particular case I only care about the FIN_WAIT (1 or 2) states. My first thought was to set SO_LINGER to 0, but I'm not completely convinced that an abortive close is the right (or the only) option here.
Any idea how I can handle this? Is there another way to force Windows to close those connections (programmatically)?
(EDIT)
EventLoopGroup group = new NioEventLoopGroup();
try {
    Bootstrap b = new Bootstrap();
    b.group(group)
            .channel(NioSocketChannel.class)
            .option(ChannelOption.TCP_NODELAY, true)
            .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000)
            // .option(ChannelOption.SO_LINGER, 0) ( ?? )
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) throws Exception {
                    ChannelPipeline p = ch.pipeline();
                    p.addLast(new MyHandler());
                    p.addLast(new WriteTimeoutHandler(8, TimeUnit.SECONDS));
                    ch.config()
                            .setSendBufferSize(TCP_BUF_SIZE)
                            .setReceiveBufferSize(TCP_BUF_SIZE);
                }
            });
    ChannelFuture channelFuture = b.connect(host, port).sync();
    channelFuture.channel().closeFuture().sync();
} finally {
    group.shutdownGracefully();
}
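For what it's worth, if an abortive close does turn out to be acceptable, SO_LINGER does not have to be set bootstrap-wide. As a sketch, it can also be applied to an individual channel just before tearing it down (channel here stands for whichever connection is being closed):

// SO_LINGER = 0 makes the subsequent close() send an RST, skipping FIN_WAIT/TIME_WAIT.
// Apply it only to the channel being torn down, not to the whole bootstrap.
channel.config().setOption(ChannelOption.SO_LINGER, 0);
channel.close();

Whether the RST-style close is appropriate still depends on whether the peer can tolerate losing any data in flight.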

Multiple Channels with Different Service Paths

[I am using Netty WebSocket]
I have a use case where different service paths should be served on the same port. I tried many different approaches; the reasons I couldn't get it to work were:
In the ServerBootstrap class there is only one slot for a child ChannelHandler, so I cannot add multiple child handlers with different service paths.
In the ServerBootstrap class it is not possible to configure multiple groups.
This is what my initChannel looks like:
@Override
protected void initChannel(SocketChannel socketChannel) throws Exception {
    logger.debug(1, "Initializing the SocketChannel : {}", socketChannel.id());
    socketChannel.pipeline().addLast(
            new HttpRequestDecoder(),
            new HttpObjectAggregator(maxPayloadSize),
            new HttpResponseEncoder(),
            new IdleStateHandler(0, 0, listenerConfig.getSocketTimeout(), TimeUnit.SECONDS),
            new WebSocketServerProtocolHandler(ingressConfig.getURI().getPath()), // (A)
            new WebSocketServerCompressionHandler(),
            new WebSocketIO(listenerConfig, manager), // a handler
            new WebSocketMessageListener(messageReceiver, manager) // a handler
    );
    logger.debug(2, "Successfully initialized the Socket Channel : {}", socketChannel.id());
}
Line (A) registers the handler with the given service path (the service path is ingressConfig.getURI().getPath()).
int maxPayloadSize = listenerConfig.getMaxPayloadSize();
try {
    bossGroup = new NioEventLoopGroup(listenerConfig.getBossThreadCount());
    workerGroup = new NioEventLoopGroup(listenerConfig.getWorkerThreadCount());
    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.group(bossGroup, workerGroup)
            .channel(NioServerSocketChannel.class)
            .handler(new LoggingHandler(LogLevel.INFO))
            .childHandler(new WebSocketListenerInitializer(messageReceiver, maxPayloadSize, listenerConfig, ingressConfig))
            .option(ChannelOption.SO_BACKLOG, 128)
            .childOption(ChannelOption.SO_KEEPALIVE, true);
    ChannelFuture channelFuture = bootstrap.bind(port);
    channelFuture.sync();
    channel = channelFuture.channel();
    if (channelFuture.isSuccess()) {
        logger.info(1, "WebSocket listener started on port : {} successfully", port);
    } else {
        logger.error(2, "Failed to start WebSocket server on port : {}", port, channelFuture.cause());
        throw new TransportException("Failed to start WebSocket server", channelFuture.cause());
    }
} catch (InterruptedException ex) {
    logger.error(1, "Interrupted Exception from : {}", WebSocketListener.class);
    throw new TransportException("Interrupted Exception", ex);
}
Can anyone suggest a way to do this?
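One common approach, sketched below (not from the question), is to keep a single childHandler and decide per connection which WebSocket path applies: put a small routing handler in front of the handshake that inspects the upgrade request's URI and then adds the path-specific handlers. The class and handler names (PathRoutingHandler, ServiceAHandler, ServiceBHandler) and the paths are hypothetical.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.QueryStringDecoder;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;

// Placed after the HttpObjectAggregator and before any WebSocket handlers.
public class PathRoutingHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        String path = new QueryStringDecoder(req.uri()).path();
        ChannelPipeline p = ctx.pipeline();
        if ("/serviceA".equals(path)) {
            p.addLast(new WebSocketServerProtocolHandler("/serviceA"));
            p.addLast(new ServiceAHandler());   // hypothetical business handler
        } else if ("/serviceB".equals(path)) {
            p.addLast(new WebSocketServerProtocolHandler("/serviceB"));
            p.addLast(new ServiceBHandler());   // hypothetical business handler
        } else {
            ctx.close();
            return;
        }
        p.remove(this);                         // routing decision made; step out of the pipeline
        ctx.fireChannelRead(req.retain());      // replay the request so the handshake handler sees it
    }
}

If the paths only differ by a prefix, the checkStartsWith flag that newer WebSocketServerProtocolHandler constructors appear to offer may be enough on its own, without any routing handler.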

How to get a callback on a Netty client after the server establishes an SSL connection

My server expects clients to connect on two different sockets, and the order of connection is important. The client must connect on the first channel ch1, and after the SSL handshake the server takes time to create a user session. In the handler it looks like this:
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
    log.debug("channelRegistered");
    ctx.pipeline().get(SslHandler.class).handshakeFuture().addListener(
            future -> initSession(ctx));
}
The initSession method creates internal objects to track the client. Only after initSession completes does the server expect a connection on the second channel ch2 from this client.
I'm stuck writing the client code to enforce this connection order.
The naive way is easy:
public static void main(String[] args) throws Exception {
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        SslContext sslContext = provideSslContext();
        Bootstrap b = new Bootstrap();
        b.group(workerGroup)
                .channel(NioSocketChannel.class)
                .handler(new Channelinitializer(sslContext));
        Channel ch1 = b.connect("localhost", 8008).sync().channel();
        Thread.sleep(1000);
        Bootstrap b1 = new Bootstrap();
        b1.group(workerGroup)
                .channel(NioSocketChannel.class)
                .handler(new Channelinitializer(sslContext));
        Channel ch2 = b1.connect("localhost", 8009).sync().channel();
    } finally {
        workerGroup.shutdownGracefully();
    }
}
After the ch1 connect we just wait for some time to be sure the server has performed all the required actions.
What should a robust solution look like? Is there a callback I can use to trigger the ch2 connection? I'm using Netty 4.0.36.Final.
You can just retrieve the SslHandler from the pipeline and wait on the handshakeFuture, or add a listener to it. Then, when it completes, do the second connect.
Something like:
SslContext sslContext = provideSslContext();
Bootstrap b = new Bootstrap();
b.group(workerGroup)
        .channel(NioSocketChannel.class)
        .handler(new Channelinitializer(sslContext));
Channel ch1 = b.connect("localhost", 8008).sync().channel();
ch1.pipeline().get(SslHandler.class).handshakeFuture().sync();

Bootstrap b1 = new Bootstrap();
b1.group(workerGroup)
        .channel(NioSocketChannel.class)
        .handler(new Channelinitializer(sslContext));
Channel ch2 = b1.connect("localhost", 8009).sync().channel();
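If blocking with sync() is undesirable, the same idea can be expressed with a listener. A minimal sketch, assuming b1 has already been configured the same way as above:

// Trigger the second connect only once the TLS handshake on ch1 has finished.
ch1.pipeline().get(SslHandler.class).handshakeFuture().addListener(future -> {
    if (future.isSuccess()) {
        b1.connect("localhost", 8009);   // returns a ChannelFuture; add further listeners as needed
    } else {
        ch1.close();                     // handshake failed; don't open the second channel
    }
});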

Netty Connection Retries

Retry Connection in Netty
I am building a client socket system. The requirements are:
First attempt to connect to the remote server.
When the first attempt fails, keep trying until the server is online.
I would like to know whether there is such a feature in Netty, or how best I can solve this.
Thank you very much.
This is the code snippet I am struggling with:
protected void connect() throws Exception {
    this.bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool()));
    // Configure the event pipeline factory.
    bootstrap.setPipelineFactory(new SmpPipelineFactory());
    bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
    bootstrap.setOption("sendBufferSize", 1048576);
    bootstrap.setOption("receiveBufferSize", 1048576);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);
    // Make a new connection.
    final ChannelFuture connectFuture = bootstrap
            .connect(new InetSocketAddress(config.getRemoteAddr(), config.getRemotePort()));
    channel = connectFuture.getChannel();
    connectFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (connectFuture.isSuccess()) {
                // Connection attempt succeeded:
                // Begin to accept incoming traffic.
                channel.setReadable(true);
            } else {
                // Close the connection if the connection attempt has failed.
                channel.close();
                logger.info("Unable to Connect to the Remote Socket server");
            }
        }
    });
}
Assuming Netty 3.x, the simplest example would be:
// Configure the client.
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));
ChannelFuture future = null;
while (true) {
    future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
    future.awaitUninterruptibly();
    if (future.isSuccess()) {
        break;
    }
}
Obviously you'd want your own logic for the loop that sets a maximum number of tries, etc. Netty 4.x has a slightly different bootstrap, but the logic is the same. This is also synchronous, blocking, and ignores InterruptedException; in a real application you might register a ChannelFutureListener with the Future and be notified when the Future completes.
Added after the OP edited the question:
You have a ChannelFutureListener that is getting notified. If you want to then retry the connection, you're going to have to either have that listener hold a reference to the bootstrap, or communicate back to your main thread that the connection attempt failed and have it retry the operation. If you have the listener do it (which is the simplest way), be aware that you need to limit the number of retries to prevent infinite recursion - it's being executed in the context of the Netty worker thread. If you exhaust your retries, again, you'll need to communicate that back to your main thread; you could do that via a volatile variable, or the observer pattern could be used.
When dealing with async you really have to think concurrently. There are a number of ways to skin that particular cat.
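For reference, a minimal sketch of the same listener-driven retry against the Netty 4.x Bootstrap API; the host, port, delay, and retry limit are illustrative, and the next attempt is rescheduled on the event loop instead of sleeping in the I/O thread:

void connectWithRetry(Bootstrap bootstrap, int retriesLeft) {
    bootstrap.connect("127.0.0.1", 80).addListener((ChannelFutureListener) future -> {
        if (future.isSuccess()) {
            // connected; hand the channel to the rest of the application
        } else if (retriesLeft > 0) {
            // Schedule the next attempt instead of blocking the Netty thread with sleep().
            future.channel().eventLoop().schedule(
                    () -> connectWithRetry(bootstrap, retriesLeft - 1),
                    5, TimeUnit.SECONDS);
        } else {
            // out of retries; signal the failure back to the application
        }
    });
}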
Thank you Brian Roach. The connected variable is volatile and can be accessed outside this code for further processing.
final InetSocketAddress sockAddr = new InetSocketAddress(
        config.getRemoteAddr(), config.getRemotePort());
final ChannelFuture connectFuture = bootstrap.connect(sockAddr);
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // Connection attempt succeeded:
            // Begin to accept incoming traffic.
            channel.setReadable(true);
            connected = true;
        } else {
            // Close the connection if the connection attempt has failed.
            channel.close();
            if (!connected) {
                logger.debug("Attempt to connect within " + ((double) frequency / (double) 1000) + " seconds");
                try {
                    Thread.sleep(frequency);
                } catch (InterruptedException e) {
                    logger.error(e.getMessage());
                }
                bootstrap.connect(sockAddr).addListener(this);
            }
        }
    }
});
