Netty - how to survive DDoS? - java

I'm using Netty 4.1 as an NIO socket server for an MMORPG game. It ran perfectly for years, but recently we have been suffering from DDoS attacks. I've been fighting them for a long time, but at this point I'm out of ideas for how to improve things. The attacker spams new connections from thousands of IPs all over the world. It's difficult to cut it off at the network level because the attacks look very similar to normal players. The attacks are not very big compared to attacks on HTTP servers, but big enough to crash our game.
How I'm using Netty:
public void startServer() {
    bossGroup = new NioEventLoopGroup(1);
    workerGroup = new NioEventLoopGroup();
    try {
        int timeout = (Settings.SOCKET_TIMEOUT * 1000);
        bootstrap = new ServerBootstrap();
        int bufferSize = 65536;
        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childOption(ChannelOption.SO_KEEPALIVE, true)
                .childOption(ChannelOption.SO_TIMEOUT, timeout)
                .childOption(ChannelOption.SO_RCVBUF, bufferSize)
                .childOption(ChannelOption.SO_SNDBUF, bufferSize)
                .handler(new LoggingHandler(LogLevel.INFO))
                .childHandler(new CustomInitalizer(sslCtx));
        ChannelFuture bind = bootstrap.bind(DrServerAdmin.port);
        bossChannel = bind.sync();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}
The initializer:
public class CustomInitalizer extends ChannelInitializer<SocketChannel> {
    public static DefaultEventExecutorGroup normalGroup = new DefaultEventExecutorGroup(16);
    public static DefaultEventExecutorGroup loginGroup = new DefaultEventExecutorGroup(8);
    public static DefaultEventExecutorGroup commandsGroup = new DefaultEventExecutorGroup(4);
    private final SslContext sslCtx;

    public CustomInitalizer(SslContext sslCtx) {
        this.sslCtx = sslCtx;
    }

    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        if (sslCtx != null) {
            pipeline.addLast(sslCtx.newHandler(ch.alloc()));
        }
        pipeline.addLast(new CustomFirewall()); // an AbstractRemoteAddressFilter<InetSocketAddress>
        int limit = 32768;
        pipeline.addLast(new DelimiterBasedFrameDecoder(limit, Delimiters.nulDelimiter()));
        pipeline.addLast("decoder", new StringDecoder(CharsetUtil.UTF_8));
        pipeline.addLast("encoder", new StringEncoder(CharsetUtil.UTF_8));
        pipeline.addLast(new CustomReadTimeoutHandler(Settings.SOCKET_TIMEOUT));

        int id = DrServerNetty.getDrServer().getIdClient();
        CustomHandler normalHandler = new CustomHandler();
        FlashClientNetty client = new FlashClientNetty(normalHandler, id);
        normalHandler.setClient(client);
        pipeline.addLast(normalGroup, "normalHandler", normalHandler);

        CustomLoginHandler loginHandler = new CustomLoginHandler(client);
        pipeline.addLast(loginGroup, "loginHandler", loginHandler);

        CustomCommandsHandler commandsHandler = new CustomCommandsHandler(loginHandler.client);
        pipeline.addLast(commandsGroup, "commandsHandler", commandsHandler);
    }
}
I'm using 5 groups:
bossGroup (bootstrap) - accepts new connections
workerGroup (bootstrap) - delivers messages
normalGroup - handles most messages
loginGroup - handles the heavy login process
commandsGroup - handles some heavy logic
I'm monitoring the number of new connections and messages, so I can immediately tell when an attack is in progress. During an attack I stop accepting new connections: I return false in the custom firewall (AbstractRemoteAddressFilter).
@Override
protected boolean accept(ChannelHandlerContext ctx, InetSocketAddress remoteAddress) throws Exception {
    if (ddosDetected())
        return false;
    else
        return true;
}
But even though I'm dropping new connections right away, my worker group is getting overloaded. The pending tasks for the worker group (all the other groups are fine) keep growing, which causes longer and longer round trips for normal players until they finally get kicked by socket timeouts. I'm not sure why this happens. During normal server usage the busiest groups are the login and normal groups. At the network level the server is fine - it's using just ~10% of its bandwidth limit. CPU and RAM usage aren't very high during the attack either. But after a few minutes of such an attack, all my players are kicked out of the game and are not able to connect anymore.
Is there any better way to instantly drop all incoming connections and protect the users that are already connected?

I think you will need to "fix this" at the kernel level, for example via iptables. Otherwise you can only close the connection after you have already accepted it, which doesn't sound good enough in this case.
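If you also want to stop accepting inside the JVM, one option (a sketch, not from the original answer; it reuses the bootstrap, DrServerAdmin.port, and the ddosDetected() trigger from the question, and the method names are mine) is to unbind the parent channel while the attack lasts. Closing the server channel stops new accepts entirely, while already-accepted child channels keep running, and you can re-bind once traffic normalizes:
// Hedged sketch: unbind while under attack, re-bind afterwards.
private volatile Channel serverChannel; // set from bootstrap.bind(...).sync().channel()

public void onAttackDetected() {
    Channel ch = serverChannel;
    if (ch != null && ch.isActive()) {
        ch.close(); // unbinds the listener; already-connected players are unaffected
    }
}

public void onAttackEnded() throws InterruptedException {
    // Re-bind the same bootstrap once traffic looks normal again.
    serverChannel = bootstrap.bind(DrServerAdmin.port).sync().channel();
}
This keeps the attack from ever reaching the worker group, at the cost of also turning away legitimate new players for the duration.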

Related

socket taking too long in fin_wait_1

Getting this strange behavior and I don't know if SO_LINGER(0) is the "best way" to solve it.
It only happens when I have these two factors together:
OS - Windows
lower internet bandwidth
When these two factors combine, my TCP connections (client-side), right after calling close(), get stuck in FIN_WAIT and then TIME_WAIT for a long, long time.
(On the same network, the exact same app running on another OS behaves as expected. The same applies to Windows on a better network connection.)
In this particular case I only care about the FIN_WAIT (1 or 2) states. My first thought was to set SO_LINGER to 0, but I'm not completely convinced that an abortive close is the right (or the only) option here.
Any idea how I can handle this? Is there another way to force Windows to close those connections (programmatically)?
(EDIT)
EventLoopGroup group = new NioEventLoopGroup();
try {
    Bootstrap b = new Bootstrap();
    b.group(group)
            .channel(NioSocketChannel.class)
            .option(ChannelOption.TCP_NODELAY, true)
            .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000)
            // .option(ChannelOption.SO_LINGER, 0) ( ?? )
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) throws Exception {
                    ChannelPipeline p = ch.pipeline();
                    p.addLast(new MyHandler());
                    p.addLast(new WriteTimeoutHandler(8, TimeUnit.SECONDS));
                    ch.config()
                            .setSendBufferSize(TCP_BUF_SIZE)
                            .setReceiveBufferSize(TCP_BUF_SIZE);
                }
            });
    ChannelFuture channelFuture = b.connect(host, port).sync();
    channelFuture.channel().closeFuture().sync();
} finally {
    group.shutdownGracefully();
}
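For reference, a minimal sketch of the abortive close being considered (an illustration, not a recommendation; ch is assumed to be an io.netty.channel.socket.SocketChannel). With SO_LINGER set to 0, close() sends an RST instead of a FIN, so the socket never enters FIN_WAIT_1/2 or TIME_WAIT - at the cost of discarding any unsent data:
// Hedged sketch: abortive close of a single Netty socket channel.
ch.config().setSoLinger(0); // close() now aborts: RST, no FIN_WAIT or TIME_WAIT
ch.close();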

Netty Add Channel to ServerBootstrap

I have a ServerBootstrap accepting data from clients. Most of the connections can come from any endpoint, but I also want to handle data coming from one specific endpoint.
I'm basically reading and writing strings over n+1 connections. If the one specific connection ever closes, I need to reopen it again.
Currently I'm trying to run a Bootstrap connected to the specific endpoint alongside a ServerBootstrap handling all of the incoming connections, but the sync() that starts one of the bootstraps blocks the rest of the application and I can't run the other one.
Or is it possible to just create a channel from scratch, connect it, and add it to the EventLoopGroup?
Here's an example of what I have so far. Currently startServer() blocks at channelFuture.channel().closeFuture().sync()
private Channel mChannel;
private EventLoopGroup mListeningGroup;
private EventLoopGroup mSpeakingGroup;

public void startServer() {
    try {
        ServerBootstrap bootstrap = new ServerBootstrap()
                .group(mListeningGroup, mSpeakingGroup)
                .channel(NioServerSocketChannel.class)
                .option(ChannelOption.SO_BACKLOG, 1024)
                .childOption(ChannelOption.SO_KEEPALIVE, true)
                .childHandler(new ServerInitializer());
        ChannelFuture channelFuture = bootstrap.bind(mListeningPort).sync();
        channelFuture.channel().closeFuture().sync();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        mListeningGroup.shutdownGracefully();
        mSpeakingGroup.shutdownGracefully();
    }
}

public void startClient() throws InterruptedException {
    Bootstrap bootstrap = new Bootstrap()
            .group(mSpeakingGroup)
            .channel(NioSocketChannel.class)
            .option(ChannelOption.SO_BACKLOG, 1024)
            .option(ChannelOption.TCP_NODELAY, true)
            .option(ChannelOption.SO_KEEPALIVE, true)
            .handler(new ClientInitializer());
    ChannelFuture future = bootstrap.connect(mAddress, mPort).sync();
    mChannel = future.channel();
    mChannel.closeFuture().addListener((ChannelFutureListener) futureListener -> mChannel = null).sync();
}
Once data is read by any of the n+1 sockets, it puts its message into a PriorityQueue, and a while loop continuously pops off the queue and writes the data to every Channel. Does anyone have any ideas on the best way to approach this?
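One way around the blocking sync() (a sketch under the question's own names - mListeningPort, ServerInitializer, and the two groups - not a definitive fix): wait only for the bind to complete, then register a listener on the close future instead of sync()ing on it, so startServer() returns and startClient() can run on the same thread.
// Hedged sketch: start the server without parking the calling thread.
public void startServer() throws InterruptedException {
    ServerBootstrap bootstrap = new ServerBootstrap()
            .group(mListeningGroup, mSpeakingGroup)
            .channel(NioServerSocketChannel.class)
            .childHandler(new ServerInitializer());
    Channel serverChannel = bootstrap.bind(mListeningPort).sync().channel();
    // Clean up asynchronously when the server channel eventually closes,
    // instead of blocking here with closeFuture().sync().
    serverChannel.closeFuture().addListener((ChannelFutureListener) f -> {
        mListeningGroup.shutdownGracefully();
        mSpeakingGroup.shutdownGracefully();
    });
    // Control returns to the caller; startClient() can now run.
}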

Netty 4.0.23 multiple hosts single client

My question is about creating multiple TCP clients to multiple hosts using the same event loop group in Netty 4.0.23 Final. I must admit that I don't quite understand Netty 4's client threading model, especially with the loads of confusing references to Netty 3.x implementations I hit during my research on the internet.
With the following code, I establish a connection with a single server and send random commands using a command queue:
public class TCPsocket {
    private static final CircularFifoQueue CommandQueue = new CircularFifoQueue(20);
    private final EventLoopGroup workerGroup;
    private final TcpClientInitializer tcpHandlerInit; // all handlers shareable

    public TCPsocket() {
        workerGroup = new NioEventLoopGroup();
        tcpHandlerInit = new TcpClientInitializer();
    }

    public void connect(String host, int port) throws InterruptedException {
        try {
            Bootstrap b = new Bootstrap();
            b.group(workerGroup);
            b.channel(NioSocketChannel.class);
            b.remoteAddress(host, port);
            b.handler(tcpHandlerInit);
            Channel ch = b.connect().sync().channel();
            ChannelFuture writeCommand = null;
            for (;;) {
                if (!CommandQueue.isEmpty()) {
                    writeCommand = ch.writeAndFlush(CommandExecute()); // CommandExecute() fetches a command from the CommandQueue and encodes it into a byte array
                }
                if (CommandQueue.isFull()) { // this will never happen ... or should never happen
                    ch.closeFuture().sync();
                    break;
                }
            }
            if (writeCommand != null) {
                writeCommand.sync();
            }
        } finally {
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TCPsocket socket = new TCPsocket();
        socket.connect("192.168.0.1", 2101);
    }
}
In addition to executing commands off the command queue, this client keeps receiving periodic responses from the server, triggered by an initial command that is sent as soon as the channel becomes active. In one of the registered handlers (in the TcpClientInitializer implementation), I have:
@Override
public void channelActive(ChannelHandlerContext ctx) {
    ctx.writeAndFlush(firstMessage);
    System.out.println("sent first message\n");
}
which activates a feature in the connected-to server, triggering periodic packets that are returned from the server throughout the life span of my application.
The problem comes when I try to use this same setup to connect to multiple servers by looping through a string array of known server IPs:
public static void main(String[] args) throws InterruptedException {
    String[] hosts = new String[]{"192.168.0.2", "192.168.0.4", "192.168.0.5"};
    TCPsocket socket = new TCPsocket();
    for (String host : hosts) {
        socket.connect(host, 2101);
    }
}
Once the first connection is established and the server (192.168.0.2) starts sending the designated periodic packets, no other connection is attempted. I think this is because the main thread is waiting on the connection to die and so never runs the second iteration of the for loop. The discussion in this question leads me to think that the connection process is started in a separate thread, allowing the main thread to continue executing, but that's not what I see here. So what is actually happening? And how would I go about implementing multiple host connections using the same client in Netty 4.0.23 Final?
Thanks in advance
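The connect itself is asynchronous: connect().sync() only waits for the TCP handshake. What blocks the main thread is connect()'s own body - the for (;;) command loop, the closeFuture().sync(), and the shutdownGracefully() in finally. A sketch of one alternative (assuming the question's TcpClientInitializer with @Sharable handlers): reuse one Bootstrap and one event loop group, connect to every host first, and only then wait on the close futures.
// Hedged sketch: one shared NioEventLoopGroup, one Bootstrap, N connections.
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

import java.util.ArrayList;
import java.util.List;

public class MultiHostClient {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup(); // one group for all connections
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new TcpClientInitializer()); // from the question; handlers must be @Sharable

            List<Channel> channels = new ArrayList<>();
            for (String host : new String[]{"192.168.0.2", "192.168.0.4", "192.168.0.5"}) {
                // sync() waits only for the handshake, not for the channel
                // to close, so the loop proceeds to the next host.
                channels.add(b.connect(host, 2101).sync().channel());
            }
            // Only now wait for all connections to run their course.
            for (Channel ch : channels) {
                ch.closeFuture().sync();
            }
        } finally {
            group.shutdownGracefully();
        }
    }
}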

Exception during Netty server shutdown

I have an application running on Tomcat and use Netty 4 for websocket handling.
The Netty server is started in a ServletContextListener's contextInitialized method and stopped in contextDestroyed.
This is my class for the Netty server:
public class WebSocketServer {
    private final int port;
    private final EventLoopGroup bossGroup;
    private final EventLoopGroup workerGroup;
    private Channel serverChannel;

    public WebSocketServer(int port) {
        this.port = port;
        bossGroup = new NioEventLoopGroup(1);
        workerGroup = new NioEventLoopGroup();
    }

    public void run() throws Exception {
        final ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
                .childHandler(new WebSocketServerInitializer());
        serverChannel = b.bind(port).sync().channel();
        System.out.println("Web socket server started at port " + port + '.');
        System.out.println("Open your browser and navigate to http://localhost:" + port + '/');
    }

    public void stop() {
        if (serverChannel != null) {
            ChannelFuture chFuture = serverChannel.close();
            chFuture.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    shutdownWorkers();
                }
            });
        } else {
            shutdownWorkers();
        }
    }

    private void shutdownWorkers() {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}
It works fine after starting, but when I try to stop Tomcat I get this exception:
INFO: Illegal access: this web application instance has been stopped already. Could not load io.netty.util.concurrent.DefaultPromise$3. The eventual following stack trace is caused by an error thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access, and has no functional impact.
java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1610)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1569)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:592)
at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:403)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:139)
at java.lang.Thread.run(Thread.java:662)
After that, Tomcat hangs.
What could be the reason?
I assume you call shutdownWorkers() somewhere from Servlet.destroy() or use some other mechanism that ensures your server goes down when the servlet stops/unloads.
Then you need to do:
void shutdownWorkers() {
    Future<?> fb = bossGroup.shutdownGracefully();
    Future<?> fw = workerGroup.shutdownGracefully();
    try {
        fb.await();
        fw.await();
    } catch (InterruptedException ignore) {}
}
This is because shutdownGracefully() returns a Future, and without waiting for it to complete, you leave the things that are still trying to close connections in a very stressful environment. It also makes sense to initiate both shutdowns first and only then wait for the futures; that way they run in parallel and finish faster.
It fixed the issue for me. Obviously, you can make it nicer to your system by not swallowing the InterruptedException, wrapping each call in a nice method, and putting a reasonable timeout on each await(). A nice exercise in general, but in reality you most probably wouldn't care at this point in your code.
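A sketch of that nicer variant (the timeout value and interrupt handling are my additions; the 30-second bound is arbitrary):
private void shutdownWorkers() {
    // Initiate both shutdowns first so they proceed in parallel.
    Future<?> fb = bossGroup.shutdownGracefully();
    Future<?> fw = workerGroup.shutdownGracefully();
    try {
        // Bounded waits so a container shutdown can't hang on Netty forever.
        fb.await(30, TimeUnit.SECONDS);
        fw.await(30, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve the interrupt instead of swallowing it
    }
}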
Side note: and yes, for WebSockets you will be better off with Tomcat's native, standards-compliant and robust implementation. Netty is awesome for many other things, but it would be the wrong tool here.

Netty Connection Retries

Retry Connection in Netty
I am building a client socket system. The requirements are:
First, attempt to connect to the remote server.
When the first attempt fails, keep trying until the server is online.
I would like to know whether there is such a feature in Netty, or how best I can solve this.
Thank you very much.
This is the code snippet I am struggling with:
protected void connect() throws Exception {
    this.bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool()));
    // Configure the event pipeline factory.
    bootstrap.setPipelineFactory(new SmpPipelineFactory());
    bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
    bootstrap.setOption("sendBufferSize", 1048576);
    bootstrap.setOption("receiveBufferSize", 1048576);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);
    // Make a new connection.
    final ChannelFuture connectFuture = bootstrap
            .connect(new InetSocketAddress(config.getRemoteAddr(), config.getRemotePort()));
    channel = connectFuture.getChannel();
    connectFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (connectFuture.isSuccess()) {
                // Connection attempt succeeded:
                // begin to accept incoming traffic.
                channel.setReadable(true);
            } else {
                // Close the connection if the connection attempt has failed.
                channel.close();
                logger.info("Unable to Connect to the Remote Socket server");
            }
        }
    });
}
Assuming Netty 3.x, the simplest example would be:
// Configure the client.
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));
ChannelFuture future = null;
while (true) {
    future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
    future.awaitUninterruptibly();
    if (future.isSuccess()) {
        break;
    }
}
Obviously you'd want your own logic in the loop to set a maximum number of tries, etc. Netty 4.x has a slightly different bootstrap, but the logic is the same. This is also synchronous, blocking, and ignores InterruptedException; in a real application you might register a ChannelFutureListener with the Future and be notified when the Future completes.
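For comparison, the same blocking retry loop against the Netty 4.x Bootstrap API could look roughly like this (a sketch; MyClientInitializer is a hypothetical handler initializer, and the retry cap is arbitrary):
// Hedged sketch: blocking retry loop with the Netty 4.x Bootstrap.
EventLoopGroup group = new NioEventLoopGroup();
Bootstrap b = new Bootstrap()
        .group(group)
        .channel(NioSocketChannel.class)
        .handler(new MyClientInitializer()); // hypothetical

ChannelFuture future;
int tries = 0;
do {
    // awaitUninterruptibly() blocks until this attempt succeeds or fails.
    future = b.connect("127.0.0.1", 80).awaitUninterruptibly();
} while (!future.isSuccess() && ++tries < 10); // cap the number of retries
// remember to group.shutdownGracefully() when you are done with the client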
Added after the OP edited the question:
You have a ChannelFutureListener that is getting notified. If you want to then retry the connection you're going to have to either have that listener hold a reference to the bootstrap, or communicate back to your main thread that the connection attempt failed and have it retry the operation. If you have the listener do it (which is the simplest way) be aware that you need to limit the number of retries to prevent an infinite recursion - it's being executed in the context of the Netty worker thread. If you exhaust your retries, again, you'll need to communicate that back to your main thread; you could do that via a volatile variable, or the observer pattern could be used.
When dealing with async you really have to think concurrently. There's a number of ways to skin that particular cat.
Thank you, Brian Roach. The connected variable is volatile and can be accessed outside this code for further processing.
final InetSocketAddress sockAddr = new InetSocketAddress(
        config.getRemoteAddr(), config.getRemotePort());
final ChannelFuture connectFuture = bootstrap.connect(sockAddr);
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // Connection attempt succeeded:
            // begin to accept incoming traffic.
            channel.setReadable(true);
            connected = true;
        } else {
            // Close the connection if the connection attempt has failed.
            channel.close();
            if (!connected) {
                logger.debug("Attempt to connect within " + ((double) frequency / (double) 1000) + " seconds");
                try {
                    Thread.sleep(frequency);
                } catch (InterruptedException e) {
                    logger.error(e.getMessage());
                }
                bootstrap.connect(sockAddr).addListener(this);
            }
        }
    }
});
