Is it possible to have Netty lazily initialized via systemd/inetd, using an inherited server socket channel?
We used this in our old Jetty-based server, where Jetty would call Java's System.inheritedChannel() to get the socket created by systemd on lazy initialization.
I have searched a lot, and all I found is a Jira ticket that says it is supposedly supported in version 4: https://issues.jboss.org/browse/NETTY-309.
But this Jira ticket has no example, and I couldn't find any documentation, nor anything in the source code, that could point me to how to achieve this in Netty.
Any help would be appreciated.
Thanks
EDIT:
Just to make it clearer: what I want to know is whether it is possible to have my Java application socket-activated by systemd, and then somehow pass the socket reference to Netty.
EDIT 2:
Here is the approach suggested by Norman Maurer, but it actually fails with the exception below:
public class MyServerBootStrap {
private ServiceContext ctx;
private Config config;
private Collection<Channel> channels;
private Collection<Connector> connectors;
public MyServerBootStrap(List<Connector> connectors) {
this.ctx = ApplicationContext.getInstance();
this.config = ctx.getMainConfig();
this.connectors = connectors;
this.channels = new ArrayList<>(connectors.size());
}
public void run(Connector connector) throws RuntimeException, IOException, InterruptedException {
EventLoopGroup bossGroup = new NioEventLoopGroup(config.getInt("http_acceptor_threads", 0));
EventLoopGroup workerGroup = new NioEventLoopGroup(config.getIntError("http_server_threads"));
final SocketAddress addr;
final ChannelFactory<ServerChannel> channelFactory;
if (connector.geEndpoint().isInherited()) {
System.out.println(
"Trying to bootstrap inherited channel: " + connector.geEndpoint().getDescription());
ServerSocketChannel channel = (ServerSocketChannel) System.inheritedChannel();
addr = channel.getLocalAddress();
System.out.println("Channel localAddress(): " + addr);
channelFactory = new MyChannelFactory(channel);
} else {
System.out.println(
"Trying to bootstrap regular channel: " + connector.geEndpoint().getDescription());
addr = connector.geEndpoint().getSocketAdress();
channelFactory = new MyChannelFactory(null);
}
ServerBootstrap b = new ServerBootstrap();
b
.group(bossGroup, workerGroup)
.localAddress(addr)
.channelFactory(channelFactory)
.childHandler(new ChannelInitializerRouter(Collections.singletonList(connector)))
.childOption(ChannelOption.SO_KEEPALIVE, true);
if (config.contains("tcp_max_syn_backlog")) {
b.option(ChannelOption.SO_BACKLOG, config.getIntError("tcp_max_syn_backlog"));
}
Channel serverChannel = b.bind().sync().channel(); // fails with AlreadyBoundException when the channel is the inherited (already bound) one
channels.add(serverChannel);
}
public void run() throws RuntimeException {
try {
for (Connector connector : connectors) {
run(connector);
}
for (Channel channel : channels) {
channel.closeFuture().sync();
}
} catch (Throwable exc) {
throw new RuntimeException("Failed to start web-server", exc);
} finally {
// TODO: fix this
// workerGroup.shutdownGracefully();
// bossGroup.shutdownGracefully();
}
}
}
class MyChannelFactory implements io.netty.channel.ChannelFactory<ServerChannel> {
private ServerSocketChannel channel;
public MyChannelFactory(ServerSocketChannel ch) {
this.channel = ch;
}
@Override
public ServerChannel newChannel() {
if (channel == null) {
return new NioServerSocketChannel();
} else {
return new NioServerSocketChannel(channel);
}
}
}
log:
Trying to bootstrap inherited channel: public (tcp port: 8080)
Channel localAddress(): /0:0:0:0:0:0:0:0:8080
java.lang.RuntimeException: Failed to start web-server
at MyServerBootStrap.run(MyServerBootStrap.java:85)
at MyServer.run(MyServer.java:61)
at Main.start(Main.java:96)
at Main.main(Main.java:165)
Caused by: java.nio.channels.AlreadyBoundException
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:216)
at sun.nio.ch.InheritedChannel$InheritedServerSocketChannelImpl.bind(InheritedChannel.java:92)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:128)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:558)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1338)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:999)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:254)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:366)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Yes, it should be possible.
The NioServerSocketChannel allows you to wrap an existing Channel via its constructor. So all you need to do is write your own ChannelFactory and use it with ServerBootstrap to ensure you create a NioServerSocketChannel that wraps it.
Another approach would be not to use ServerBootstrap at all, but to create the NioServerSocketChannel manually and call register etc. yourself.
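For example, here is a rough (untested) sketch of that second approach, reusing the pieces from the question's run(Connector) method. The idea is to register the already-bound inherited channel and never call bind() on it, doing by hand roughly what ServerBootstrap's internal acceptor does for accepted connections:
ServerSocketChannel inherited = (ServerSocketChannel) System.inheritedChannel();

// NioServerSocketChannel can wrap an existing, already-bound java.nio channel
final NioServerSocketChannel serverChannel = new NioServerSocketChannel(inherited);

// Accepted connections arrive as messages on the server channel's pipeline;
// initialize and register each child with the worker group ourselves.
serverChannel.pipeline().addLast(new ChannelInboundHandlerAdapter() {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        Channel child = (Channel) msg;
        child.pipeline().addLast(new ChannelInitializerRouter(Collections.singletonList(connector)));
        child.config().setOption(ChannelOption.SO_KEEPALIVE, true);
        workerGroup.register(child);
    }
});

// Register only -- no bind(): systemd has already bound the socket.
bossGroup.register(serverChannel).syncUninterruptibly();
channels.add(serverChannel);
Alternatively, if you keep the ServerBootstrap-based code from the question, it may be enough to call b.register() instead of b.bind() for the inherited case, since it is the extra bind() on the already-bound socket that throws the AlreadyBoundException.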
I have the following binding to handle UDP packets:
private void doStartServer() throws InterruptedException {
final UDPPacketHandler udpPacketHandler = new UDPPacketHandler(messageDecodeHandler);
workerGroup = new NioEventLoopGroup(threadPoolSize);
try {
final Bootstrap bootstrap = new Bootstrap();
bootstrap
.group(workerGroup)
.handler(new LoggingHandler(nettyLevel))
.channel(NioDatagramChannel.class)
.option(ChannelOption.SO_BROADCAST, true)
.handler(udpPacketHandler);
bootstrap
.bind(serverIp, serverPort)
.sync()
.channel()
.closeFuture()
.await();
} finally {
stop();
}
}
and the handler:
@ChannelHandler.Sharable // << note this
@Slf4j
@AllArgsConstructor
public class UDPPacketHandler extends SimpleChannelInboundHandler<DatagramPacket> {
private final MessageP54Handler messageP54Handler;
@Override
public void channelReadComplete(final ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(final ChannelHandlerContext ctx, final Throwable cause) {
log.error("Exception in UDP handler", cause);
ctx.close();
}
}
At some point I get the exception java.net.SocketException: Network dropped connection on reset: no further information, which is handled in exceptionCaught. This triggers the ChannelHandlerContext to close, and at that point my whole server stops (executing the finally block from the first snippet).
How do I correctly handle the exception so that I can keep handling new connections even after such an exception occurs?
You shouldn't close the ChannelHandlerContext on an IOException when using a DatagramChannel. As DatagramChannel is "connection-less", the exception is specific to one "receive" or one "send" operation. So just log it (or whatever you want to do) and move on.
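For example, a minimal sketch of such an exceptionCaught for the UDPPacketHandler from the question (assuming the same Lombok-provided log field):
@Override
public void exceptionCaught(final ChannelHandlerContext ctx, final Throwable cause) {
    if (cause instanceof IOException) {
        // The datagram channel itself is still usable; only this receive/send failed.
        log.warn("I/O error on UDP channel, ignoring", cause);
        return; // do not close the channel
    }
    log.error("Unexpected exception in UDP handler", cause);
    ctx.close();
}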
I am writing a Spring Boot application with a TCP server on Netty. The service gets messages and checks rows in a Postgres database. The problem is that at the moment of checking the records in the database, the service hangs and stops processing other messages from the TCP channel.
Configuration:
@Bean
public void start() throws InterruptedException {
log.info("Starting server at: {} ", tcpPort);
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
ServerBootstrap b = new ServerBootstrap();
b.group(workerGroup, bossGroup)
.channel(NioServerSocketChannel.class)
.childHandler(simpleTCPChannelInitializer)
.childOption(ChannelOption.SO_KEEPALIVE, true);
// Bind and start to accept incoming connections.
ChannelFuture f = b.bind(tcpPort).sync();
if(f.isSuccess())
log.info("Server started successfully");
f.channel().closeFuture().sync();
}
Channel initialization:
private final EventExecutorGroup sqlExecutorGroup = new DefaultEventExecutorGroup(16);
protected void initChannel(SocketChannel socketChannel) {
socketChannel.pipeline().addLast(new StringEncoder());
socketChannel.pipeline().addLast(new StringDecoder());
socketChannel.pipeline().addLast(sqlExecutorGroup, simpleTCPChannelHandler);
}
and method for database:
@Override
public void processMessage(String atmRequest) {
log.info("Receive tcp atmRequest: {}", atmRequest);
checkDeviceInDatabase(deviceUid);
log.info("Receive power up command");
}
private void checkDeviceInDatabase(String deviceUid) {
statusConnectRepository.findById(deviceUid).orElseThrow(()
-> new DeviceNotFoundException("DeviceUid: " + deviceUid + " was not found in database"));
}
In the checkDeviceInDatabase(deviceUid) method, the query hangs forever.
Has anyone run into this problem?
[I am using Netty WebSocket]
I have a use case where different service paths should be served on the same port. I tried many different ways; the reasons I couldn't get it working were:
In the ServerBootstrap class there is only one slot for a child ChannelHandler, so I cannot add multiple child handlers with different service paths
In the ServerBootstrap class it is not possible to create multiple groups
This is what my initChannel looks like:
@Override
protected void initChannel(SocketChannel socketChannel) throws Exception {
logger.debug(1, "Initializing the SocketChannel : {}", socketChannel.id());
socketChannel.pipeline().addLast(
new HttpRequestDecoder(),
new HttpObjectAggregator(maxPayloadSize),
new HttpResponseEncoder(),
new IdleStateHandler(0, 0, listenerConfig.getSocketTimeout(),
TimeUnit.SECONDS),
new WebSocketServerProtocolHandler(ingressConfig.getURI().getPath()), // (A)
new WebSocketServerCompressionHandler(),
new WebSocketIO(listenerConfig, manager), // a handler
new WebSocketMessageListener(messageReceiver, manager) // a handler
);
logger.debug(2, "Successfully initialized the Socket Channel : {}", socketChannel.id());
}
The code line (A) registers a handler for the given service path (the service path is ingressConfig.getURI().getPath()).
int maxPayloadSize = listenerConfig.getMaxPayloadSize();
try {
bossGroup = new NioEventLoopGroup(listenerConfig.getBossThreadCount());
workerGroup = new NioEventLoopGroup(listenerConfig.getWorkerThreadCount());
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new WebSocketListenerInitializer(messageReceiver, maxPayloadSize, listenerConfig,
ingressConfig))
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
ChannelFuture channelFuture = bootstrap.bind(port);
channelFuture.sync();
channel = channelFuture.channel();
if (channelFuture.isSuccess()) {
logger.info(1, "WebSocket listener started on port : {} successfully", port);
} else {
logger.error(2, "Failed to start WebSocket server on port : {}", port,
channelFuture.cause());
throw new TransportException("Failed to start WebSocket server", channelFuture.cause());
}
} catch (InterruptedException ex) {
logger.error(1, "Interrupted Exception from : {}", WebSocketListener.class);
throw new TransportException("Interrupted Exception", ex);
}
Can anyone suggest a way to do this?
I have an application running on Tomcat. I use Netty 4 for WebSocket handling.
The Netty server is started in a ServletContextListener's contextInitialized method and stopped in contextDestroyed.
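Roughly, the listener wiring looks like this (a sketch; the listener class name and port are placeholders, and the listener is registered via @WebListener or web.xml):
@WebListener
public class NettyLifecycleListener implements ServletContextListener {

    private WebSocketServer server;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        server = new WebSocketServer(8090); // port is a placeholder
        try {
            server.run();
        } catch (Exception e) {
            throw new RuntimeException("Failed to start Netty WebSocket server", e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (server != null) {
            server.stop();
        }
    }
}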
This is my class for the Netty server:
public class WebSocketServer {
private final int port;
private final EventLoopGroup bossGroup;
private final EventLoopGroup workerGroup;
private Channel serverChannel;
public WebSocketServer(int port) {
this.port = port;
bossGroup = new NioEventLoopGroup(1);
workerGroup = new NioEventLoopGroup();
}
public void run() throws Exception {
final ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
.childHandler(new WebSocketServerInitializer());
serverChannel = b.bind(port).sync().channel();
System.out.println("Web socket server started at port " + port + '.');
System.out
.println("Open your browser and navigate to http://localhost:"
+ port + '/');
}
public void stop() {
if (serverChannel != null) {
ChannelFuture chFuture = serverChannel.close();
chFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
shutdownWorkers();
}
});
} else {
shutdownWorkers();
}
}
private void shutdownWorkers() {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
It works fine after starting, but when I try to stop Tomcat I get this exception:
INFO: Illegal access: this web application instance has been stopped already. Could not load io.netty.util.concurrent.DefaultPromise$3. The eventual following stack trace is caused by an error thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access, and has no functional impact.
java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1610)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1569)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:592)
at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:403)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:139)
at java.lang.Thread.run(Thread.java:662)
After that, Tomcat hangs.
What can be the reason?
I assume you call shutdownWorkers() somewhere from Servlet.destroy(), or use some other mechanism that ensures your server goes down when the servlet stops / unloads.
Then you need to do:
void shutdownWorkers() {
Future fb = bossGroup.shutdownGracefully();
Future fw = workerGroup.shutdownGracefully();
try {
fb.await();
fw.await();
} catch (InterruptedException ignore) {}
}
That is because shutdownGracefully() returns a Future, and without waiting for it to complete you leave the things that try to close the connections in a very stressful environment. It also makes sense to first initiate all shutdowns and then wait until the futures complete; this way it all runs in parallel and finishes faster.
It fixed the issue for me. Obviously, you can make it nicer on your system by not swallowing the InterruptedException, wrapping each call in a nice method, and putting a reasonable timeout on each await(). A nice exercise in general, but in reality you most probably won't care at this point in your code.
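For example, a sketch of that nicer variant (the 10-second timeout is an arbitrary choice):
void shutdownWorkers() {
    Future<?> fb = bossGroup.shutdownGracefully();
    Future<?> fw = workerGroup.shutdownGracefully();
    try {
        // Bound the wait so an undeployment never hangs indefinitely.
        fb.await(10, TimeUnit.SECONDS);
        fw.await(10, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve the interrupt status
    }
}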
Side note: and yes, for WebSockets you will be better off with Tomcat's native, standards-compliant and robust implementation. Netty is awesome for many other things, but would be the wrong tool here.
I'm getting java.io.IOException: Connection reset by peer when I try to reuse a client connection in Netty (this does not happen if I send one request, but happens every time if I send two requests, even from a single thread). My current approach involves implementing a simple ChannelPool, whose code is below. Note that the key method obtains a free channel from the freeChannels member, or creates a new channel if none are available. The method returnChannel() is responsible for freeing up a channel when we are done with the request; it is called inside the pipeline after we process the response (see the messageReceived() method of ResponseHandler in the code below). Does anyone see what I'm doing wrong, and why I'm getting the exception?
Channel pool code (note use of freeChannels.pollFirst() to get a free channel that has been returned via a call to returnChannel()):
public class ChannelPool {
private final ClientBootstrap cb;
private Deque<Channel> freeChannels = new ArrayDeque<Channel>();
private static Map<Channel, Channel> proxyToClient = new ConcurrentHashMap<Channel, Channel>();
public ChannelPool(InetSocketAddress address, ChannelPipelineFactory pipelineFactory) {
ChannelFactory clientFactory =
new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
cb = new ClientBootstrap(clientFactory);
cb.setPipelineFactory(pipelineFactory);
}
private void writeToNewChannel(final Object writable, Channel clientChannel) {
ChannelFuture cf;
synchronized (cb) {
cf = cb.connect(new InetSocketAddress("localhost", 18080));
}
final Channel ch = cf.getChannel();
proxyToClient.put(ch, clientChannel);
cf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture arg0) throws Exception {
System.out.println("channel open, writing: " + ch);
ch.write(writable);
}
});
}
public void executeWrite(Object writable, Channel clientChannel) {
synchronized (freeChannels) {
while (!freeChannels.isEmpty()) {
Channel ch = freeChannels.pollFirst();
System.out.println("trying to reuse channel: " + ch + " " + ch.isOpen());
if (ch.isOpen()) {
proxyToClient.put(ch, clientChannel);
ch.write(writable).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture cf) throws Exception {
System.out.println("write from reused channel complete, success? " + cf.isSuccess());
}
});
// EDIT: I needed a return here
}
}
}
writeToNewChannel(writable, clientChannel);
}
public void returnChannel(Channel ch) {
synchronized (freeChannels) {
freeChannels.addLast(ch);
}
}
public Channel getClientChannel(Channel proxyChannel) {
return proxyToClient.get(proxyChannel);
}
}
Netty pipeline code (Note that RequestHandler calls executeWrite() which uses either a new or an old channel, and ResponseHandler calls returnChannel() after the response is received and the content is set in the response to the client):
public class NettyExample {
private static ChannelPool pool;
public static void main(String[] args) throws Exception {
pool = new ChannelPool(
new InetSocketAddress("localhost", 18080),
new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
return Channels.pipeline(
new HttpRequestEncoder(),
new HttpResponseDecoder(),
new ResponseHandler());
}
});
ChannelFactory factory =
new NioServerSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
ServerBootstrap sb = new ServerBootstrap(factory);
sb.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
return Channels.pipeline(
new HttpRequestDecoder(),
new HttpResponseEncoder(),
new RequestHandler());
}
});
sb.setOption("child.tcpNoDelay", true);
sb.setOption("child.keepAlive", true);
sb.bind(new InetSocketAddress(2080));
}
private static class ResponseHandler extends SimpleChannelHandler {
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
final HttpResponse proxyResponse = (HttpResponse) e.getMessage();
final Channel proxyChannel = e.getChannel();
Channel clientChannel = pool.getClientChannel(proxyChannel);
HttpResponse clientResponse = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
clientResponse.setHeader(HttpHeaders.Names.CONTENT_TYPE, "text/html; charset=UTF-8");
HttpHeaders.setContentLength(clientResponse, proxyResponse.getContent().readableBytes());
clientResponse.setContent(proxyResponse.getContent());
pool.returnChannel(proxyChannel);
clientChannel.write(clientResponse);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
e.getCause().printStackTrace();
Channel ch = e.getChannel();
ch.close();
}
}
private static class RequestHandler extends SimpleChannelHandler {
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
final HttpRequest request = (HttpRequest) e.getMessage();
pool.executeWrite(request, e.getChannel());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
e.getCause().printStackTrace();
Channel ch = e.getChannel();
ch.close();
}
}
}
EDIT: To give more detail, I've written a trace of what's happening on the proxy connection. Note that the following involves two serial requests performed by a synchronous apache commons client. The first request uses a new channel and completes fine, and the second request attempts to reuse the same channel, which is open and writable, but inexplicably fails (I've not been able to intercept any problem other than noticing the exception thrown from the worker thread). Evidently, the second request completes successfully when a retry is made. Many seconds after both requests complete, both connections finally close (i.e., even if the connection were closed by the peer, this is not reflected by any event I've intercepted):
channel open: [id: 0x6e6fbedf]
channel connect requested: [id: 0x6e6fbedf]
channel open, writing: [id: 0x6e6fbedf, /127.0.0.1:47031 => localhost/127.0.0.1:18080]
channel connected: [id: 0x6e6fbedf, /127.0.0.1:47031 => localhost/127.0.0.1:18080]
trying to reuse channel: [id: 0x6e6fbedf, /127.0.0.1:47031 => localhost/127.0.0.1:18080] true
channel open: [id: 0x3999abd1]
channel connect requested: [id: 0x3999abd1]
channel open, writing: [id: 0x3999abd1, /127.0.0.1:47032 => localhost/127.0.0.1:18080]
channel connected: [id: 0x3999abd1, /127.0.0.1:47032 => localhost/127.0.0.1:18080]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
at sun.nio.ch.IOUtil.read(IOUtil.java:186)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:63)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Finally figured this out. There were two issues causing the connection reset. First, I was not calling releaseConnection() on the Apache Commons HttpClient that was sending requests to the proxy (see the follow-up question). Second, executeWrite() was issuing the same call to the proxied server twice in the case where the connection was reused; I needed to return after the first write rather than continuing with the while loop. The result of this double proxy call was that I was sending duplicate responses to the original client, mangling the connection with the client.
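For reference, a sketch of the corrected executeWrite() (the only change from the code in the question is returning once a reused channel has accepted the write; the logging listener is omitted for brevity):
public void executeWrite(Object writable, Channel clientChannel) {
    synchronized (freeChannels) {
        while (!freeChannels.isEmpty()) {
            Channel ch = freeChannels.pollFirst();
            if (ch.isOpen()) {
                proxyToClient.put(ch, clientChannel);
                ch.write(writable);
                return; // reuse succeeded -- do not also write on a new channel
            }
        }
    }
    // No open pooled channel was available; fall back to a fresh connection.
    writeToNewChannel(writable, clientChannel);
}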