I wonder how I could shut down a Netty client.
public void disconnect() {
try {
bootstrap.bind().channel().disconnect();
dataGroup.shutdownGracefully();
System.out.println(Strings.INFO_PREF + "Disconnected from server and stopped Client.");
} catch (Exception ex) {
ex.printStackTrace();
}
}
You need to hold references to the client Channel and EventLoopGroup when you start the client, and close them when necessary.
public void start() {
NioEventLoopGroup nioEventLoopGroup = new NioEventLoopGroup(1);
Bootstrap b = new Bootstrap();
b.group(nioEventLoopGroup)
.channel(NioSocketChannel.class)
.handler(getChannelInitializer());
this.nioEventLoopGroup = nioEventLoopGroup;
this.channel = b.connect(host, port).sync().channel();
}
// this method returns only after the client has stopped
public ChannelFuture stop() {
ChannelFuture channelFuture = channel.close().awaitUninterruptibly();
// you have to shut down the EventLoopGroup as well
nioEventLoopGroup.shutdownGracefully();
return channelFuture;
}
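For completeness, a minimal usage sketch (the wrapper class name NettyClient and its constructor are assumptions; channel, nioEventLoopGroup, host and port are fields of that class):
// hypothetical usage of the start()/stop() methods above
NettyClient client = new NettyClient("localhost", 8007);
client.start();  // connects and stores the channel and event loop group
// ... exchange data ...
client.stop();   // closes the channel and shuts the group down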
Related
I have developed a Java TCP client with Netty.
I need to use the same local port number to connect to the server after I disconnect, but I can't reuse the same local port. I think the reason is that the socket is in the TIME_WAIT state after closing the connection and the kernel doesn't allow it.
Is there a way to always use the same local port number to connect to a TCP server?
You can use .option(ChannelOption.SO_REUSEADDR, true).
Sample code:
private Bootstrap createBootstrap(ConnectionConfig config) {
final int THREAD_NUM = 1;
Bootstrap bootstrap = new Bootstrap();
EventLoopGroup group = new NioEventLoopGroup(THREAD_NUM);
bootstrap.group(group)
.channel(NioSocketChannel.class)
.option(ChannelOption.TCP_NODELAY, true)
.option(ChannelOption.SO_KEEPALIVE, true)
.option(ChannelOption.SO_REUSEADDR, true)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel channel) throws Exception {
ChannelPipeline pipeline = channel.pipeline();
pipeline.addLast(new IdleStateHandler(config.getReaderIdleTimeMs(), config.getWriterIdleTimeMs(), 0, TimeUnit.MILLISECONDS));
pipeline.addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, PacketProtocol.Offset.LENGTH, PacketProtocol.LENGTH_LEN, 0-PacketProtocol.LENGTH_LEN, 0));
pipeline.addLast(new CodecHandler());
pipeline.addLast(new NettyChannelHandler(ConnectionImpl.this));
}
});
try {
bootstrap.bind(localPort).sync();
} catch (InterruptedException e) {
LOG.error("bootstrap bind localPort={} error", localPort, e);
throw new IllegalStateException("bootstrap bind error");
}
return bootstrap;
}
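As a side note, Bootstrap.connect can also take the local address directly, which avoids the separate bind step; a sketch under the assumption that host, port and localPort are defined nearby:
// pin the local port while connecting to the remote server;
// SO_REUSEADDR set above allows reusing the port across reconnects
ChannelFuture f = bootstrap.connect(
        new InetSocketAddress(host, port),        // remote address
        new InetSocketAddress(localPort)).sync(); // fixed local port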
I am trying to create a client that retries the connection when the previous attempt times out. This program tries to connect to localhost:8007, where port 8007 has no service behind it, so the program retries after the connection times out. But the code freezes after running for a while, once there are about 3600 threads. I expect it to keep retrying rather than freeze.
The last line on standard output is "retry connect begin".
Does anyone know why it freezes?
JProfiler's thread statistics show 2 threads blocked on java.lang.ThreadGroup:
[screenshot: JProfiler showing the program's thread statistics]
public final class EchoClient2 {
static final boolean SSL = System.getProperty("ssl") != null;
static final String HOST = System.getProperty("host", "127.0.0.1");
static final int PORT = Integer.parseInt(System.getProperty("port", "8007"));
static final int SIZE = Integer.parseInt(System.getProperty("size", "256"));
public static void main(String[] args) throws Exception {
// Configure SSL.
EchoClient2 echoClient2 = new EchoClient2();
echoClient2.connect();
}
public void connect() throws InterruptedException {
final SslContext sslCtx;
// Configure the client.
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.option(ChannelOption.TCP_NODELAY, true)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
//p.addLast(new LoggingHandler(LogLevel.INFO));
p.addLast(new EchoClientHandler());
}
});
// Start the client.
ChannelFuture f = b.connect(HOST, PORT);
f.addListener(new ConnectionListener());
System.out.println("add listener");
f.sync();
System.out.println("connect sync finish");
// Wait until the connection is closed.
f.channel().closeFuture().sync();
System.out.println("channel close");
} finally {
// Shut down the event loop to terminate all threads.
//group.shutdownGracefully();
}
}
}
public class ConnectionListener implements ChannelFutureListener {
@Override
public void operationComplete(ChannelFuture channelFuture) throws Exception {
System.out.println("enter listener");
EventLoop eventLoop = channelFuture.channel().eventLoop();
eventLoop.schedule(new Runnable() {
@Override
public void run() {
try {
System.out.println("retry connect begin");
new EchoClient2().connect();
System.out.println("retry connect exit");
} catch (InterruptedException e) {
System.out.println(e);
e.printStackTrace();
}
}
}, 10, TimeUnit.MILLISECONDS);
System.out.println("exit listener");
}
}
I am using Netty 5.0.
I have a complementary client bootstrap, for which I took the SecureChatClient.java example from the Netty GitHub repository.
When I send messages from the client bootstrap to the server it works perfectly fine. When I try to send messages from the server bootstrap to the client (after successfully initiating a connection/channel through the client first), I get a java.lang.UnsupportedOperationException without any further information. Sending messages from server to client is done via the code below.
Is a ServerBootstrap for receiving only?
Is a ServerBootstrap not meant to be able to write messages back to the client as shown below? By that I mean: messages can enter a ChannelPipeline from a socket and travel up through the ChannelHandlers, but only the ChannelHandlers are supposed to write responses back down the ChannelPipeline and out the socket, so in a ServerBootstrap a user is not meant to send messages down the ChannelPipeline from outside the pipeline. (I hope that makes sense.)
Or am I simply missing something?
My code follows:
// Ports.
int serverPort = 8080;
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast("MyMessageHandler", new MyMessageHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
// Bind and start to accept incoming connections.
ChannelFuture f = b.bind(serverPort).sync();
Channel ch = f.channel();
System.out.println("Server: Running!");
// Read commands from the stdin.
ChannelFuture lastWriteFuture = null;
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
while(true)
{
String line = in.readLine();
if (line == null) break;
ByteBuf getOut = buffer(64);
getOut.writeBytes(line.getBytes());
// Sends the received line to the server.
lastWriteFuture = ch.writeAndFlush(getOut);
lastWriteFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture cf) throws Exception {
if(cf.isSuccess()) {
System.out.println("CFListener: SUCCESS! YEAH! HELL! YEAH!");
} else {
System.out.println("CFListener: failure! FAILure! FAILURE!");
System.out.println(cf.cause());
}
}
});
}
// Wait until all messages are flushed before closing the channel.
if (lastWriteFuture != null) {
lastWriteFuture.sync();
}
// Wait until the server socket is closed.
// In this example, this does not happen, but you can do that to gracefully
// shut down your server.
f.channel().closeFuture().sync();
} catch (InterruptedException | UnsupportedOperationException e) {
e.printStackTrace();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
I started using the following example: https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/securechat
My problem is that I get the following exception when calling ch.writeAndFlush:
java.lang.UnsupportedOperationException
at io.netty.channel.socket.nio.NioServerSocketChannel.filterOutboundMessage(NioServerSocketChannel.java:184)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:784)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1278)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeWriteNow(ChannelHandlerInvokerUtil.java:158)
at io.netty.channel.DefaultChannelHandlerInvoker$WriteTask.run(DefaultChannelHandlerInvoker.java:440)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:328)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
at io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
at io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
at io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
at io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
at io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)
You cannot write to a ServerChannel; you can only write to the normal (child) channels of connected clients. Your call to writeAndFlush fails for this reason.
To send a message to every client, you should store the channel of every client inside a ChannelGroup and invoke writeAndFlush() on that.
A quick way to do this is to add another handler to your ServerBootstrap that puts incoming connections into the ChannelGroup. A quick implementation would be:
// In your main:
ChannelGroup allChannels =
new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
// In your ChannelInitializer<SocketChannel>
ch.pipeline().addLast("grouper", new GlobalSendHandler());
// New class:
public class GlobalSendHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
// allChannels is assumed reachable here, e.g. as a field passed in or a static
allChannels.add(ctx.channel());
super.channelActive(ctx);
}
}
Then we can call the following to send a message to every connection; this returns a ChannelGroupFuture instead of a normal ChannelFuture:
allChannels.writeAndFlush(getOut);
Your total code would look like this with the fixes from above:
// Ports.
int serverPort = 8080;
ChannelGroup allChannels =
new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast("MyMessageHandler", new MyMessageHandler());
ch.pipeline().addLast("grouper", new GlobalSendHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
// Bind and start to accept incoming connections.
ChannelFuture f = b.bind(serverPort).sync();
Channel ch = f.channel();
System.out.println("Server: Running!");
// Read commands from the stdin.
ChannelGroupFuture lastWriteFuture = null;
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
while(true)
{
String line = in.readLine();
if (line == null) break;
ByteBuf getOut = buffer(64);
getOut.writeBytes(line.getBytes());
// Sends the received line to the server.
lastWriteFuture = allChannels.writeAndFlush(getOut);
lastWriteFuture.addListener(new ChannelGroupFutureListener() {
@Override
public void operationComplete(ChannelGroupFuture cf) throws Exception {
if(cf.isSuccess()) {
System.out.println("CFListener: SUCCESS! YEAH! HELL! YEAH!");
} else {
System.out.println("CFListener: failure! FAILure! FAILURE!");
System.out.println(cf.cause());
}
}
});
}
// Wait until all messages are flushed before closing the channel.
if (lastWriteFuture != null) {
lastWriteFuture.sync();
}
// Wait until the server socket is closed.
// In this example, this does not happen, but you can do that to gracefully
// shut down your server.
f.channel().closeFuture().sync();
} catch (InterruptedException | UnsupportedOperationException e) {
e.printStackTrace();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
I think your Netty server has no decoder or encoder. If you want to send String data, add a StringEncoder and StringDecoder:
serverBootstrap.group(bossGroup, workerGroup).childHandler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel channel) throws Exception {
ChannelPipeline channelPipeline = channel.pipeline();
channelPipeline.addLast("String Encoder", new StringEncoder(CharsetUtil.UTF_8));
channelPipeline.addLast("String Decoder", new StringDecoder(CharsetUtil.UTF_8));
}
});
Add these to your server's initializer!
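With those codecs in place, the server can write plain strings instead of hand-built ByteBufs; a sketch, assuming ch is a connected client (child) channel:
// StringEncoder turns the String into bytes on the way out
ch.writeAndFlush("Hello client\n");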
Here is my client side source code:
public class Client
{
Server server;
Logger logger;
ChannelHandlerContext responseCtx;
public Client(String host, int port, int mode, String fileName)
{
server=null;
EventLoopGroup group = new NioEventLoopGroup();
try
{
Bootstrap b = new Bootstrap();
b.group(group);
b.channel(NioSocketChannel.class);
b.remoteAddress(new InetSocketAddress(host, port));
b.handler(new MyChannelInitializer(server, mode,fileName));
ChannelFuture f = b.connect().sync();
f.channel().closeFuture().sync();
System.out.println("client started");
}
catch (InterruptedException e)
{
e.printStackTrace();
}
finally
{
try {
group.shutdownGracefully().sync();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
Calendar endTime=Calendar.getInstance();
System.out.println("client stopped "+endTime.getTimeInMillis());
}
}
public static void main(String[] args) throws Exception
{
@SuppressWarnings("unused")
Client c=new Client("localhost",1234,MyFtpServer.SENDFILE,"D:\\SITO3\\Documents\\Xmas-20141224-310.jpg");
}
}
Here is my File Transfer Complete Listener source code:
public class FileTransferCompleteListener implements ChannelFutureListener
{
Server server;
public FileTransferCompleteListener(Server server)
{
this.server=server;
}
@Override
public void operationComplete(ChannelFuture cf) throws Exception
{
Calendar endTime=Calendar.getInstance();
System.out.println("File transfer completed! "+endTime.getTimeInMillis());
if(server!=null)
server.stop();
else
cf.channel().close();
}
}
Here is the execution result:
File transfer completed! 1451446521041
client started
client stopped 1451446523244
I want to know why it takes about 2 seconds to close the connection.
Is there any way to reduce it?
Thank you very much.
Take a look at shutdownGracefully(long quietPeriod, long timeout, TimeUnit unit); you can specify the quietPeriod, which by default is a couple of seconds.
I'm not sure what the implications are if you shorten it.
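A minimal sketch of what that could look like (the values here are illustrative assumptions, not recommendations):
// the no-arg shutdownGracefully() waits a ~2 s quiet period by default;
// passing explicit values shortens that window
group.shutdownGracefully(100, 1000, TimeUnit.MILLISECONDS).sync();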
I'm using netty 4.0.24.Final.
I need to start/stop a Netty server programmatically.
On starting the server, the thread gets blocked at
f.channel().closeFuture().sync()
Please help with some hints on how to do it correctly.
Below is the EchoServer that is called by the Main class.
Thanks.
package nettytests;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;
public class EchoServer {
private final int PORT = 8007;
private EventLoopGroup bossGroup;
private EventLoopGroup workerGroup;
public void start() throws Exception {
// Configure the server.
bossGroup = new NioEventLoopGroup(1);
workerGroup = new NioEventLoopGroup(1);
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 100)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new EchoServerHandler());
}
});
// Start the server.
ChannelFuture f = b.bind(PORT).sync();
// Wait until the server socket is closed. Thread gets blocked.
f.channel().closeFuture().sync();
} finally {
// Shut down all event loops to terminate all threads.
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
public void stop(){
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
package nettytests;
public class Main {
public static void main(String[] args) throws Exception {
EchoServer server = new EchoServer();
// start server
server.start();
// not called, because the thread is blocked above
server.stop();
}
}
UPDATE:
I changed the EchoServer class in the following way. The idea is to start the server in a new thread and keep references to the EventLoopGroups.
Is this the right way?
package nettytests;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;
/**
* Echoes back any received data from a client.
*/
public class EchoServer {
private final int PORT = 8007;
private EventLoopGroup bossGroup;
private EventLoopGroup workerGroup;
public void start() throws Exception {
new Thread(() -> {
// Configure the server.
bossGroup = new NioEventLoopGroup(1);
workerGroup = new NioEventLoopGroup(1);
Thread.currentThread().setName("ServerThread");
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 100)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new EchoServerHandler());
}
});
// Start the server.
ChannelFuture f = b.bind(PORT).sync();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
// Shut down all event loops to terminate all threads.
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}).start();
}
public void stop() throws InterruptedException {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
One way is to do something like this:
// once having an event in your handler (EchoServerHandler)
// Close the current channel
ctx.channel().close();
// Then close the parent channel (the one attached to the bind)
ctx.channel().parent().close();
Doing it this way will let the following call return:
// Wait until the server socket is closed. Thread gets blocked.
f.channel().closeFuture().sync();
No need for an extra thread in the main part.
Now the question is: what kind of event? That's up to you. It might be a message to the echo handler such as "shutdown", taken as an order to shut the server down, as opposed to a plain "quit", which would close only the client channel. It might be something else...
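For illustration, a hypothetical version of such a handler (the "shutdown" keyword is an assumption, and a StringDecoder/StringEncoder pair is assumed to be installed upstream in the pipeline):
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class EchoServerHandler extends SimpleChannelInboundHandler<String> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        if ("shutdown".equals(msg.trim())) {
            ctx.channel().close();          // close this client's channel
            ctx.channel().parent().close(); // close the parent (bind) channel
        } else {
            ctx.writeAndFlush(msg);         // normal echo behaviour
        }
    }
}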
If you do not handle the shutdown from a child channel (that is, through your handler) but through another process (for instance, checking for the existence of a stop file), then you need an extra thread that waits for this event and then directly calls channel.close(), where channel is the parent one (from f.channel()), for instance...
Many other solutions exist.
I was following the official tutorials and encountered the same problem. The tutorials share the same pattern:
f.channel().closeFuture().sync();
...
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
That is, the channel is closed before the groups are shut down. I changed the order to this:
bossGroup.shutdownGracefully().sync();
workerGroup.shutdownGracefully().sync();
f.channel().closeFuture().sync();
And it worked. This results in a rough modified example of a server which doesn't lock:
class Server
{
private ChannelFuture future;
private NioEventLoopGroup masterGroup;
private NioEventLoopGroup workerGroup;
Server(int networkPort)
{
masterGroup = new NioEventLoopGroup();
workerGroup = new NioEventLoopGroup();
try
{
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(masterGroup, workerGroup);
serverBootstrap.channel(NioServerSocketChannel.class);
serverBootstrap.option(ChannelOption.SO_BACKLOG,128);
serverBootstrap.childOption(ChannelOption.SO_KEEPALIVE,true);
serverBootstrap.childHandler(new ChannelInitializer<SocketChannel>()
{
@Override
protected void initChannel(SocketChannel ch)
{
ch.pipeline().addLast(new InboundHandler());
}
}).validate();
future = serverBootstrap.bind(networkPort).sync();
System.out.println("Started server on "+networkPort);
}
catch (Exception e)
{
e.printStackTrace();
shutdown();
}
}
void shutdown()
{
System.out.println("Stopping server");
try
{
masterGroup.shutdownGracefully().sync();
workerGroup.shutdownGracefully().sync();
future.channel().closeFuture().sync();
System.out.println("Server stopped");
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
}
I just shut down the EventLoopGroups
bossGroup.shutdownGracefully().sync();
workerGroup.shutdownGracefully().sync();
and it works great: when I then send a request to my proxy server with Retrofit, it says that it "fails to connect", which confirms the server has stopped.