UnsupportedOperationException when sending messages through ServerBootstrap ChannelPipeline in Netty - java

I am using Netty 5.0.
I have a complementary client Bootstrap, for which I took the SecureChatClient.java example from the Netty GitHub repository.
When I send messages from the client bootstrap to the server it works perfectly fine. When I try to send messages from the server bootstrap to the client (after successfully initiating a connection/channel through the client first) I get a java.lang.UnsupportedOperationException without any further information. Sending messages from server to client is done via the code below.
Is a ServerBootstrap for receiving only?
Is a ServerBootstrap not meant to be able to write messages back to the client as shown above? By that I mean: messages can enter a ChannelPipeline from a socket and travel up through the ChannelHandlers, but only the ChannelHandlers are supposed to write responses back down the ChannelPipeline and out the socket. So in a ServerBootstrap a user is not meant to send messages down the ChannelPipeline from outside the pipeline. (Hope that makes sense.)
Or am I simply missing something?
My code follows:
// Ports.
int serverPort = 8080;
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast("MyMessageHandler", new MyMessageHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
// Bind and start to accept incoming connections.
ChannelFuture f = b.bind(serverPort).sync();
Channel ch = f.channel();
System.out.println("Server: Running!");
// Read commands from the stdin.
ChannelFuture lastWriteFuture = null;
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
while(true)
{
String line = in.readLine();
if (line == null) break;
ByteBuf getOut = buffer(64);
getOut.writeBytes(line.getBytes());
// Sends the line read from stdin to the client.
lastWriteFuture = ch.writeAndFlush(getOut);
lastWriteFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture cf) throws Exception {
if(cf.isSuccess()) {
System.out.println("CFListener: SUCCESS! YEAH! HELL! YEAH!");
} else {
System.out.println("CFListener: failure! FAILure! FAILURE!");
System.out.println(cf.cause());
}
}
});
}
// Wait until all messages are flushed before closing the channel.
if (lastWriteFuture != null) {
lastWriteFuture.sync();
}
// Wait until the server socket is closed.
// In this example, this does not happen, but you can do that to gracefully
// shut down your server.
f.channel().closeFuture().sync();
} catch (InterruptedException | UnsupportedOperationException e) {
e.printStackTrace();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
I started using the following example: https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/securechat
My problem is that I get the following exception when calling ch.writeAndFlush:
java.lang.UnsupportedOperationException
at io.netty.channel.socket.nio.NioServerSocketChannel.filterOutboundMessage(NioServerSocketChannel.java:184)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:784)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1278)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeWriteNow(ChannelHandlerInvokerUtil.java:158)
at io.netty.channel.DefaultChannelHandlerInvoker$WriteTask.run(DefaultChannelHandlerInvoker.java:440)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:328)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
at io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
at io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
at io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
at io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
at io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)

You cannot write to a ServerChannel; you can only write to the normal (child) channels of connected clients. Your call to writeAndFlush is failing for this reason.
To send a message to every client, store the channel of each client in a ChannelGroup and invoke writeAndFlush() on that group.
A quick way to do this is to add another handler to your ServerBootstrap that puts incoming connections into the ChannelGroup. A quick implementation would be:
// In your main:
ChannelGroup allChannels =
new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
// In your ChannelInitializer<SocketChannel>
ch.pipeline().addLast("grouper", new GlobalSendHandler());
// New class:
public class GlobalSendHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
allChannels.add(ctx.channel());
super.channelActive(ctx);
}
}
Then we can call the following to send a message to every connection; this returns a ChannelGroupFuture instead of a normal ChannelFuture:
allChannels.writeAndFlush(getOut);
Your total code would look like this with the fixes from above:
// Ports.
int serverPort = 8080;
ChannelGroup allChannels =
new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast("MyMessageHandler", new MyMessageHandler());
ch.pipeline().addLast("grouper", new GlobalSendHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
// Bind and start to accept incoming connections.
ChannelFuture f = b.bind(serverPort).sync();
Channel ch = f.channel();
System.out.println("Server: Running!");
// Read commands from the stdin.
ChannelGroupFuture lastWriteFuture = null;
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
while(true)
{
String line = in.readLine();
if (line == null) break;
ByteBuf getOut = buffer(64);
getOut.writeBytes(line.getBytes());
// Sends the line read from stdin to every connected client.
lastWriteFuture = allChannels.writeAndFlush(getOut);
lastWriteFuture.addListener(new ChannelGroupFutureListener() {
@Override
public void operationComplete(ChannelGroupFuture cf) throws Exception {
if(cf.isSuccess()) {
System.out.println("CFListener: SUCCESS! YEAH! HELL! YEAH!");
} else {
System.out.println("CFListener: failure! FAILure! FAILURE!");
System.out.println(cf.cause());
}
}
});
}
// Wait until all messages are flushed before closing the channel.
if (lastWriteFuture != null) {
lastWriteFuture.sync();
}
// Wait until the server socket is closed.
// In this example, this does not happen, but you can do that to gracefully
// shut down your server.
f.channel().closeFuture().sync();
} catch (InterruptedException | UnsupportedOperationException e) {
e.printStackTrace();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}

I think your Netty server has no decoder or encoder in its pipeline.
If you want to send String data, add them like this:
serverBootstrap.group(bossGroup, workerGroup).childHandler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel channel) throws Exception {
ChannelPipeline channelPipeline = channel.pipeline();
channelPipeline.addLast("String Encoder", new StringEncoder(CharsetUtil.UTF_8));
channelPipeline.addLast("String Decoder", new StringDecoder(CharsetUtil.UTF_8));
}
});
Add your server's Initializer!
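Putting this together with the handler from the original question, a child initializer with the string codec might look like this (a sketch; MyMessageHandler is the handler from the question, and the pipeline entry names are arbitrary):

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.CharsetUtil;

public class ServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline p = ch.pipeline();
        // Outbound: String -> bytes; inbound: bytes -> String.
        p.addLast("stringEncoder", new StringEncoder(CharsetUtil.UTF_8));
        p.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
        // Business-logic handler from the question; it now receives Strings
        // and can writeAndFlush plain Strings instead of raw ByteBufs.
        p.addLast("MyMessageHandler", new MyMessageHandler());
    }
}
```

With the encoder installed, the stdin loop in the question can call writeAndFlush(line) directly, without allocating a ByteBuf by hand.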

Related

Re-use same local port number with Netty client

I have developed a java TCP client with Netty.
I should use the same local port number to reconnect to the server after a disconnect, but I can't reuse the local port. I think the reason is that the socket is in the TIME_WAIT state after closing the connection and the kernel doesn't allow it.
Is there a way to always use the same local port number to connect to a TCP server?
You can use .option(ChannelOption.SO_REUSEADDR, true).
Sample code:
private Bootstrap createBootstrap(ConnectionConfig config) {
final int THREAD_NUM = 1;
Bootstrap bootstrap = new Bootstrap();
EventLoopGroup group = new NioEventLoopGroup(THREAD_NUM);
bootstrap.group(group)
.channel(NioSocketChannel.class)
.option(ChannelOption.TCP_NODELAY, true)
.option(ChannelOption.SO_KEEPALIVE, true)
.option(ChannelOption.SO_REUSEADDR, true)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel channel) throws Exception {
ChannelPipeline pipeline = channel.pipeline();
pipeline.addLast(new IdleStateHandler(config.getReaderIdleTimeMs(), config.getWriterIdleTimeMs(), 0, TimeUnit.MILLISECONDS));
pipeline.addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, PacketProtocol.Offset.LENGTH, PacketProtocol.LENGTH_LEN, 0-PacketProtocol.LENGTH_LEN, 0));
pipeline.addLast(new CodecHandler());
pipeline.addLast(new NettyChannelHandler(ConnectionImpl.this));
}
});
try {
bootstrap.bind(localPort).sync();
} catch (InterruptedException e) {
LOG.error("bootstrap bind localPort={} error", localPort, e);
throw new IllegalStateException("bootstrap bind error");
}
return bootstrap;
}

How could I shutdown a netty client?

I wonder how I could shut down a Netty client:
public void disconnect() {
try {
bootstrap.bind().channel().disconnect();
dataGroup.shutdownGracefully();
System.out.println(Strings.INFO_PREF + "Disconnected from server and stopped Client.");
} catch (Exception ex) {
ex.printStackTrace();
}
}
You need to hold references to the client Channel and the EventLoopGroup from when you start the client, and close them when necessary.
public void start() {
NioEventLoopGroup nioEventLoopGroup = new NioEventLoopGroup(1);
Bootstrap b = new Bootstrap();
b.group(nioEventLoopGroup)
.channel(NioSocketChannel.class)
.handler(getChannelInitializer());
this.nioEventLoopGroup = nioEventLoopGroup;
this.channel = b.connect(host, port).sync().channel();
}
// this method returns once the client channel has been closed
public ChannelFuture stop() {
ChannelFuture channelFuture = channel.close().awaitUninterruptibly();
//you have to close eventLoopGroup as well
nioEventLoopGroup.shutdownGracefully();
return channelFuture;
}

Getting the main thread back in Netty ServerSocket

I have a question about how I could get the main thread back in Netty while creating a TCP server socket.
In the code below, taken from here, "Hello Hello" would never be written to the output, because the thread that starts the server waits on this line: f.channel().closeFuture().sync();. Do I need to create a separate thread to get the main thread back, or is there any way in Netty that would allow me to do so (getting the main thread back while having the TCP server running in the background)?
public void start() throws Exception {
NioEventLoopGroup group = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(group)
.channel(NioServerSocketChannel.class)
.localAddress(new InetSocketAddress(port))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(
new EchoServerHandler());
}
});
ChannelFuture f = b.bind().sync();
System.out.println(EchoServer.class.getName() + " started and listen on " + f.channel().localAddress());
f.channel().closeFuture().sync();
} finally {
group.shutdownGracefully().sync();
}
}
public static void main(String[] args) throws Exception {
if (args.length != 1) {
System.err.println(
"Usage: " + EchoServer.class.getSimpleName() +
" <port>");
return;
}
int port = Integer.parseInt(args[0]);
new EchoServer(port).start();
System.out.println("Hello Hello");
}
You are not required to wait for the close future. This is only done in the tutorials to make sure the event loop group is properly closed.
You can remove the f.channel().closeFuture().sync(); and group.shutdownGracefully().sync(); lines from your program to make it non-blocking.
Make sure to call f.channel().close(), then f.channel().closeFuture().sync(), and finally group.shutdownGracefully().sync() when shutting down your main program, so that the Netty stack is properly stopped.
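A sketch of that shutdown order, applied to the EchoServer from the question (start() returns to the caller as soon as the bind completes; the class and field names here are illustrative):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NonBlockingEchoServer {
    private NioEventLoopGroup group;
    private Channel serverChannel;

    // Returns once the bind is done; the server keeps running on the
    // event loop threads in the background, so main() gets control back.
    public void start(int port) throws Exception {
        group = new NioEventLoopGroup();
        ServerBootstrap b = new ServerBootstrap();
        b.group(group)
         .channel(NioServerSocketChannel.class)
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             public void initChannel(SocketChannel ch) {
                 ch.pipeline().addLast(new EchoServerHandler()); // handler from the question
             }
         });
        serverChannel = b.bind(port).sync().channel(); // block only until bound
    }

    // Shutdown order from above: close the channel, wait for the close
    // to complete, then shut down the event loop group.
    public void stop() throws Exception {
        serverChannel.close();
        serverChannel.closeFuture().sync();
        group.shutdownGracefully().sync();
    }
}
```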

How to use multiple ServerBootstrap objects in Netty

I am trying to use Netty (4.0.24) to create several servers (several ServerBootstraps) in one application (one main method). I saw this question/answer but it leaves many questions unanswered:
Netty 4.0 multi port with difference protocol each port
So here are my questions:
The above answer suggests that all we need to do is create multiple ServerBootstrap objects and bind() to each. But most of the code examples I see for a single ServerBootstrap will then call something like this:
try {
b.bind().sync().channel().closeFuture().sync();
}
finally {
b.shutdown();
}
So doesn't the sync() call result in the ServerBootstrap b blocking? How can we do this for multiple ServerBootstraps? What happens if we do not call sync()? Are the sync() calls only there so that the server can be gracefully shut down with b.shutdown()? If so, is there any way to gracefully shut down multiple ServerBootstraps?
Also, I don't understand what happens when we just call bind() without calling sync(). Does the server somehow keep running? How do we shut it down gracefully?
Obviously I'm pretty confused about how all this works, and sadly Netty documentation is really lacking in this regard. Any help would be greatly appreciated.
Following the example you referenced, and addressing your question about the sync() method, here is some example code:
EventLoopGroup bossGroup = new NioEventLoopGroup(numBossThreads);
EventLoopGroup workerGroup = new NioEventLoopGroup(numWorkerThreads);
ServerBootstrap sb1 = null;
ServerBootstrap sb2 = null;
ServerBootstrap sb3 = null;
Channel ch1 = null;
Channel ch2 = null;
Channel ch3 = null;
try {
sb1 = new ServerBootstrap();
sb1.group(bossGroup, workerGroup);
...
ch1 = sb1.bind().sync().channel();
sb2 = new ServerBootstrap();
sb2.group(bossGroup, workerGroup);
...
ch2 = sb2.bind().sync().channel();
sb3 = new ServerBootstrap();
sb3.group(bossGroup, workerGroup);
...
ch3 = sb3.bind().sync().channel();
} finally {
// Now wait for the parent channels (the bound ones) to be closed
if (ch1 != null) {
ch1.closeFuture().sync();
}
if (ch2 != null) {
ch2.closeFuture().sync();
}
if (ch3 != null) {
ch3.closeFuture().sync();
}
// The event loop groups are shared by all three bootstraps,
// so shut them down once, at the end
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
So now for the explanations (I'll try):
The bind() command creates the corresponding listening socket. It returns immediately (non-blocking), so the parent channel might not be available yet.
The first sync() command (bind().sync()) waits for the binding to complete (if an exception is raised, execution goes directly to the finally block). At this stage, the channel is definitely ready and listening for new connections.
The channel() command gets this listening channel (the parent one, not yet connected to anyone). Every client will generate a "child" channel of this parent one.
In your handler, after some event, you decide to close the parent channel (not the child ones, but the one listening and waiting for new sockets). To do this, just call parentChannel.close() (or, from a child channel, child.parent().close()).
The closeFuture() command gets the future for this close event.
When this future completes (is done), that is when the last sync() command (closeFuture().sync()) returns.
Once the parent channels are closed, you can ask for a graceful shutdown of the shared event loop groups.
So doing it this way (waiting for the closeFuture, then shutdownGracefully) is a clean way to shut down all resources attached to these ServerBootstraps.
Of course you can change things a bit. For instance, you can avoid getting the channel up front and only fetch it later, when you want to block before the graceful shutdown.
EventLoopGroup bossGroup = new NioEventLoopGroup(numBossThreads);
EventLoopGroup workerGroup = new NioEventLoopGroup(numWorkerThreads);
ServerBootstrap sb1 = null;
ServerBootstrap sb2 = null;
ServerBootstrap sb3 = null;
ChannelFuture cf1 = null;
ChannelFuture cf2 = null;
ChannelFuture cf3 = null;
try {
sb1 = new ServerBootstrap();
sb1.group(bossGroup, workerGroup);
...
cf1 = sb1.bind();
sb2 = new ServerBootstrap();
sb2.group(bossGroup, workerGroup);
...
cf2 = sb2.bind();
sb3 = new ServerBootstrap();
sb3.group(bossGroup, workerGroup);
...
cf3 = sb3.bind();
} finally {
// Now wait for the parent channels (the bound ones) to be closed
if (cf1 != null) {
cf1.sync().channel().closeFuture().sync();
}
if (cf2 != null) {
cf2.sync().channel().closeFuture().sync();
}
if (cf3 != null) {
cf3.sync().channel().closeFuture().sync();
}
// Shut down the shared event loop groups once, at the end
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
This way you don't block at all while opening the 3 channels, but then wait for all 3 to be done before shutting them down.
Finally, if you block neither on the bind() event nor on the closeFuture() event, it's up to you to define how you will wait after the sbX.bind() commands and before shutting down the ServerBootstraps, for instance by running each server in its own thread:
public static void main(String[] args) {
new Thread(new Runnable(){
@Override
public void run() {
//{...} ServerBootstrap 1
}
}).start();
new Thread(new Runnable(){
@Override
public void run() {
//{...} ServerBootstrap 2
}
}).start();
new Thread(new Runnable(){
@Override
public void run() {
//{...} ServerBootstrap 3
}
}).start();
}

Channel closed after first write

I wrote a client class that handles multiple TCP connections to different TCP servers as follows:
private int nThreads;
private Charset charset;
private Bootstrap bootstrap;
private Map<String, Channel> channels = new HashMap<String, Channel>();
public MyClass() {
bootstrap = new Bootstrap()
.group(new NioEventLoopGroup(nThreads))
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new StringEncoder(charset));
}
});
}
public void send(MyObject myObject) {
final String socket = myObject.getSocket();
//Check if a channel already exists for this socket
Channel channel = channels.get(socket);
if(channel == null) {
/* No channel found for this socket. */
//Extract host and port from socket
String[] hostport = socket.split(":", 2);
int port = Integer.parseInt(hostport[1]);
//Create new channel
ChannelFuture connectionFuture;
try {
connectionFuture = bootstrap.connect(hostport[0], port).await();
} catch (InterruptedException e) {
return;
}
//Connection operation is completed, check status
if(!connectionFuture.isSuccess()) {
return;
}
//Add channel to the map
channel = connectionFuture.channel();
channels.put(socket, channel);
}
//Write message on channel
final String message = myObject.getMessage();
channel.writeAndFlush(message).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if(!future.isSuccess()) {
//Log cause
return;
}
}
});
}
}
When the send() method is called for the first time for a given socket, a connection is established to the remote server and the message is correctly sent. However, when the send() method is called a second time for the same socket, the Channel is found in the map but the writeAndFlush() operation fails and the cause indicates the channel is closed.
I don't see anywhere in my code where I close this Channel. Is there a special configuration to avoid Netty closing the Channel?
Thanks,
Mickael
The remote host may have closed the channel because it idled too long. You will need to implement some kind of heartbeat to make sure the channel stays active.
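A common way to do this in Netty (a sketch, not part of the original answer) is to add an IdleStateHandler to the pipeline and send a small keep-alive message whenever the writer side has been idle; the "PING" payload and the 30-second interval here are arbitrary choices:

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;

// Fires a keep-alive whenever no write has happened for a while.
// Pair it with: ch.pipeline().addLast(new IdleStateHandler(0, 30, 0));
// which raises a WRITER_IDLE event after 30 seconds without writes.
public class HeartbeatHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent
                && ((IdleStateEvent) evt).state() == IdleState.WRITER_IDLE) {
            // Assumes a StringEncoder is in the pipeline (as in the question);
            // otherwise write a ByteBuf here instead.
            ctx.writeAndFlush("PING\n");
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```

The server side would then need to tolerate (or answer) the keep-alive message so it is not interpreted as application data.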