I'm developing a Netty HTTP server. When I write the response in SimpleChannelInboundHandler.channelRead0, the response body comes from another server and its size is unknown in advance, so the upstream response may carry either a Content-Length header or chunked transfer encoding. I use a buffer: if it is large enough to hold the full body (whichever encoding the upstream used), I respond with Content-Length; otherwise I respond with chunked encoding.
How do I hold on to the write channel of the first connection and pass it to the second handler in order to write the response? (I just pass ctx directly and write to it, but nothing comes back.)
How do I conditionally write chunked data to the channel versus plain data with Content-Length? (Adding a ChunkedWriteHandler inside channelRead0 when chunking is needed does not seem to work.)
Take this simplified code as an example:
```java
EventLoopGroup bossGroup = new NioEventLoopGroup();
final EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<Channel>(){
@Override
protected void initChannel(Channel ch) throws Exception
{
System.out.println("Start, I accept client");
ChannelPipeline pipeline = ch.pipeline();
// Uncomment the following line if you want HTTPS
// SSLEngine engine =
// SecureChatSslContextFactory.getServerContext().createSSLEngine();
// engine.setUseClientMode(false);
// pipeline.addLast("ssl", new SslHandler(engine));
pipeline.addLast("decoder", new HttpRequestDecoder());
// Uncomment the following line if you don't want to handle HttpChunks.
// pipeline.addLast("aggregator", new HttpChunkAggregator(1048576));
pipeline.addLast("encoder", new HttpResponseEncoder());
// Remove the following line if you don't want automatic content
// compression.
//pipeline.addLast("aggregator", new HttpChunkAggregator(1048576));
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast("deflater", new HttpContentCompressor());
pipeline.addLast("handler", new SimpleChannelInboundHandler<HttpObject>(){
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception
{
System.out.println("msg=" + msg);
final ChannelHandlerContext ctxClient2Me = ctx;
// TODO: Implement this method
Bootstrap bs = new Bootstrap();
try{
//bs.resolver(new DnsAddressResolverGroup(NioDatagramChannel.class, DefaultDnsServerAddressStreamProvider.INSTANCE));
//.option(ChannelOption.TCP_NODELAY, java.lang.Boolean.TRUE)
bs.resolver(DefaultAddressResolverGroup.INSTANCE);
}catch(Exception e){
e.printStackTrace();
}
bs.channel(NioSocketChannel.class);
EventLoopGroup cg = workerGroup;//new NioEventLoopGroup();
bs.group(cg).handler(new ChannelInitializer<Channel>(){
@Override
protected void initChannel(Channel ch) throws Exception
{
System.out.println("start, server accept me");
// TODO: Implement this method
ch.pipeline().addLast("http-request-encode", new HttpRequestEncoder());
ch.pipeline().addLast(new HttpResponseDecoder());
ch.pipeline().addLast("http-res", new SimpleChannelInboundHandler<HttpObject>(){
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception
{
// TODO: Implement this method
System.out.println("target = " + msg);
//
if(msg instanceof HttpResponse){
HttpResponse res = (HttpResponse) msg;
HttpUtil.isTransferEncodingChunked(res);
DefaultHttpResponse resClient2Me = new DefaultHttpResponse(HttpVersion.HTTP_1_1, res.getStatus());
//resClient2Me.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
//resClient2Me.headers().set(HttpHeaderNames.CONTENT_LENGTH, "");
ctxClient2Me.write(resClient2Me);
}
if(msg instanceof LastHttpContent){
// now response the request of the client, it wastes x seconds from receiving request to response
ctxClient2Me.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT).addListener(ChannelFutureListener.CLOSE);
ctx.close();
}else if( msg instanceof HttpContent){
//ctxClient2Me.write(new DefaultHttpContent(msg)); write chunk by chunk ?
}
}
});
System.out.println("end, server accept me");
}
});
final URI uri = new URI("http://example.com/");
String host = uri.getHost();
ChannelFuture connectFuture= bs.connect(host, 80);
System.out.println("to connect me to server");
connectFuture.addListener(new ChannelFutureListener(){
@Override
public void operationComplete(ChannelFuture cf) throws Exception
{
}
});
ChannelFuture connectedFuture = connectFuture.sync(); // TODO optimize, wait io
System.out.println("connected me to server");
DefaultFullHttpRequest req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, uri.getRawPath());
//req.headers().set(HttpHeaderNames.HOST, "");
connectedFuture.channel().writeAndFlush(req);
System.out.println("end of Client2Me channelRead0");
System.out.println("For the seponse of Me2Server, see SimpleChannelInboundHandler.channelRead0");
}
});
System.out.println("end, I accept client");
}
});
System.out.println("========");
ChannelFuture channelFuture = serverBootstrap.bind(2080).sync();
channelFuture.channel().closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
```
After a bit of a struggle trying to send the response from a non-Netty event-loop thread, I finally figured out the problem. If your client closes its output stream using
socketChannel.shutdownOutput()
then you need to set the ALLOW_HALF_CLOSURE option to true in Netty so it won't close the channel.
Here's a sample server. The client is left as an exercise for the reader :-)
```java
final ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
.option(ChannelOption.ALLOW_HALF_CLOSURE, true) // Doesn't work here: option() applies to the parent (server) channel, not to accepted child channels
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<io.netty.channel.socket.SocketChannel>() {
@Override
protected void initChannel(io.netty.channel.socket.SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
ctx.channel().config().setOption(ChannelOption.ALLOW_HALF_CLOSURE, true); // This is important
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuffer byteBuffer = ((ByteBuf) msg).nioBuffer();
String id = ctx.channel().id().asLongText();
// When done reading all the bytes, send the response 1 second later
timer.schedule(new TimerTask() {
@Override
public void run() {
ctx.write(Unpooled.copiedBuffer(CONTENT.asReadOnlyBuffer()));
ctx.flush();
ctx.close();
log.info("[{}] Server time to first response byte: {}", id, System.currentTimeMillis() - startTimes.get(id));
startTimes.remove(id);
}
}, 1000);
}
});
}
});
Channel ch = b.bind("localhost", PORT).sync().channel();
ch.closeFuture().sync();
```
Of course, as mentioned by others in the thread, you cannot send Strings; you need to send a ByteBuf, e.g. one built with Unpooled.copiedBuffer.
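For illustration, a minimal sketch of such a write (the helper name respond and the wrapper class are just for the example):

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.util.CharsetUtil;

final class StringWrites {
    // Minimal sketch: encode the String into a ByteBuf before writing.
    // A plain TCP pipeline only accepts ByteBuf (unless a StringEncoder
    // is installed), so writing a raw String would fail.
    static void respond(ChannelHandlerContext ctx, String text) {
        ctx.writeAndFlush(Unpooled.copiedBuffer(text, CharsetUtil.UTF_8));
    }
}
```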
See the comments on Channel below. You can keep the Channel (or context) received in ChannelInboundHandlerAdapter.channelRead(ChannelHandlerContext ctx, Object msg) (the msg is not released automatically after the method returns) or in SimpleChannelInboundHandler.channelRead0(ChannelHandlerContext ctx, I msg) (which releases the received message automatically after returning) for later use. You can also refer to the example at the end, which passes the ChannelHandlerContext to another ChannelHandler.
All I/O operations are asynchronous.
All I/O operations in Netty are asynchronous. This means that any I/O call returns immediately, with no guarantee that the requested I/O operation has completed by the time the call returns. Instead, you get back a ChannelFuture instance that will notify you when the requested I/O operation has succeeded, failed, or been cancelled.
```java
public interface Channel extends AttributeMap, Comparable<Channel> {
    /**
     * Request to write a message via this {@link Channel} through the {@link ChannelPipeline}.
     * This method will not request an actual flush, so be sure to call {@link #flush()}
     * once you want to flush all pending data to the actual transport.
     */
    ChannelFuture write(Object msg);

    /**
     * Request to write a message via this {@link Channel} through the {@link ChannelPipeline}.
     * This method will not request an actual flush, so be sure to call {@link #flush()}
     * once you want to flush all pending data to the actual transport.
     */
    ChannelFuture write(Object msg, ChannelPromise promise);

    /**
     * Request to flush all pending messages.
     */
    Channel flush();

    /**
     * Shortcut for calling {@link #write(Object, ChannelPromise)} and {@link #flush()}.
     */
    ChannelFuture writeAndFlush(Object msg, ChannelPromise promise);

    /**
     * Shortcut for calling {@link #write(Object)} and {@link #flush()}.
     */
    ChannelFuture writeAndFlush(Object msg);
}
```
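For example, instead of blocking on a write you can attach a listener to the returned future; a minimal sketch (channel and msg are assumed to be in scope):

```java
// Minimal sketch: writeAndFlush returns immediately; the listener runs on
// the event loop once the operation has succeeded, failed, or been cancelled.
ChannelFuture future = channel.writeAndFlush(msg);
future.addListener((ChannelFutureListener) f -> {
    if (f.isSuccess()) {
        // the data was handed off to the transport
    } else {
        f.cause().printStackTrace(); // inspect why the write failed
    }
});
```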
There is no need to worry about this if you have added HttpResponseEncoder to the ChannelPipeline (it is a subclass of HttpObjectEncoder, which keeps a private field, private int state = ST_INIT;, to remember whether to encode HTTP body data as chunked). The only thing left to do is add the header Transfer-Encoding: chunked, e.g. via HttpUtil.setTransferEncodingChunked(srcRes, true);.
```java
public class NettyToServerChat extends SimpleChannelInboundHandler<HttpObject> {
private static final Logger LOGGER = LoggerFactory.getLogger(NettyToServerChat.class);
public static final String CHANNEL_NAME = "NettyToServer";
protected final ChannelHandlerContext ctxClientToNetty;
/** Determines if the response supports keepalive */
private boolean responseKeepalive = true;
/** Determines if the response is chunked */
private boolean responseChunked = false;
public NettyToServerChat(ChannelHandlerContext ctxClientToNetty) {
this.ctxClientToNetty = ctxClientToNetty;
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
if (msg instanceof HttpResponse) {
HttpResponse response = (HttpResponse) msg;
HttpResponseStatus resStatus = response.status();
//LOGGER.info("Status Line: {} {} {}", response.getProtocolVersion(), resStatus.code(), resStatus.reasonPhrase());
if (!response.headers().isEmpty()) {
for (CharSequence name : response.headers().names()) {
for (CharSequence value : response.headers().getAll(name)) {
//LOGGER.info("HEADER: {} = {}", name, value);
}
}
//LOGGER.info("");
}
//response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
HttpResponse srcRes = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
if (HttpUtil.isTransferEncodingChunked(response)) {
responseChunked = true;
HttpUtil.setTransferEncodingChunked(srcRes, true);
ctxClientToNetty.channel().write(srcRes);
//ctx.channel().pipeline().addAfter(CHANNEL_NAME, "ChunkedWrite", new ChunkedWriteHandler());
} else {
ctxClientToNetty.channel().write(srcRes);
//ctx.channel().pipeline().remove("ChunkedWrite");
}
}
if (msg instanceof LastHttpContent) { // prioritize the subclass interface
ctx.close();
LOGGER.debug("ctxNettyToServer.channel().isWritable() = {}", ctxNettyToServer.channel().isWritable());
Thread.sleep(3000);
LOGGER.debug("ctxNettyToServer.channel().isWritable() = {}", ctxNettyToServer.channel().isWritable());
if(!responseChunked){
HttpContent content = (HttpContent) msg;
// https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/SimpleChannelInboundHandler.java
// @see {@link SimpleChannelInboundHandler#channelRead(ChannelHandlerContext, I)}
ctxClientToNetty.writeAndFlush(content.retain()).addListener(ChannelFutureListener.CLOSE);
} else {
ctxClientToNetty.close();
}
LOGGER.debug("ctxClientToNetty.channel().isWritable() = {}", ctxClientToNetty.channel().isWritable());
} else if (msg instanceof HttpContent) {
HttpContent content = (HttpContent) msg;
// We need to do a ReferenceCountUtil.retain() on the buffer to increase the reference count by 1
ctxClientToNetty.write(content.retain());
}
}
}
```
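Boiled down, the chunked branch of the example amounts to this (a minimal sketch; ctx stands for the client-facing context, and HttpResponseEncoder does the chunk framing once the header is set):

```java
// Minimal sketch: with HttpResponseEncoder in the pipeline, setting
// Transfer-Encoding: chunked on the response head is enough; every
// HttpContent written afterwards is framed as an HTTP chunk, and
// LastHttpContent terminates the body.
HttpResponse head = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
HttpUtil.setTransferEncodingChunked(head, true);
ctx.write(head);
ctx.write(new DefaultHttpContent(Unpooled.copiedBuffer("part 1", CharsetUtil.UTF_8)));
ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT);
```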
Related
One day I decided to create a Netty chat server using the TCP protocol. Currently it successfully logs connects and disconnects, but channelRead0 in my handler never fires. I tried a Python client.
Netty version: 4.1.6.Final
Handler code:
```java
public class ServerWrapperHandler extends SimpleChannelInboundHandler<String> {
private final TcpServer server;
public ServerWrapperHandler(TcpServer server){
this.server = server;
}
@Override
public void handlerAdded(ChannelHandlerContext ctx) {
System.out.println("Client connected.");
server.addClient(ctx);
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) {
System.out.println("Client disconnected.");
server.removeClient(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, String msg) {
System.out.println("Message received.");
server.handleMessage(ctx, msg);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
System.out.println("Read complete.");
super.channelReadComplete(ctx);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
```
Output:
```
[TCPServ] Starting on 0.0.0.0:1052
Client connected.
Read complete.
Read complete.
Client disconnected.
```
Client code:
```python
import socket

conn = socket.socket()
conn.connect(("127.0.0.1", 1052))
conn.send(b"Hello")  # bytes, not str
data = b""           # accumulate the response
tmp = conn.recv(1024)
while tmp:
    data += tmp
    tmp = conn.recv(1024)
print(data.decode("utf-8"))
conn.close()
```
By the way, the problem was in my initializer: I had added a DelimiterBasedFrameDecoder to my pipeline, and this decoder was holding the messages up. In hindsight the reason is clear: the decoder buffers inbound bytes until it sees a line delimiter, and the Python client above never sends one, so no frame is ever emitted and channelRead0 never fires. I didn't need this decoder, so I just deleted it, and everything started to work.
```java
@Override
protected void initChannel(SocketChannel ch) throws Exception {
// Create a default pipeline implementation.
ChannelPipeline pipeline = ch.pipeline();
// Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java object.
// Protocol Encoder - translates a Java object into binary data.
// Add the text line codec combination first,
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter())); //<--- DELETE THIS
pipeline.addLast("decoder", new StringDecoder());
pipeline.addLast("encoder", new StringEncoder());
pipeline.addLast("handler", new ServerWrapperHandler(tcpServer));
}
```
------------ Client-side pipeline (the client can send any type of message, e.g. HTTP requests or binary packets) --------
```java
Bootstrap bootstrap = new Bootstrap()
.group(group)
// .option(ChannelOption.TCP_NODELAY, true)
// .option(ChannelOption.SO_KEEPALIVE, true)
.channel(NioSocketChannel.class)
.handler(new ChannelInitializer() {
@Override
protected void initChannel(Channel channel) throws Exception {
channel.pipeline()
.addLast("agent-traffic-shaping", ats)
.addLast("length-decoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4))
.addLast("agent-client", new AgentClientHandler())
.addLast("4b-length", new LengthFieldPrepender(4))
;
}
});
```
------------------------------ Server-side pipeline-----------------
```java
ServerBootstrap b = new ServerBootstrap()
.group(group)
// .option(ChannelOption.TCP_NODELAY, true)
// .option(ChannelOption.SO_KEEPALIVE, true)
.channel(NioServerSocketChannel.class)
.localAddress(new InetSocketAddress(port))
.childHandler(new ChannelInitializer() {
@Override
protected void initChannel(Channel channel) throws Exception {
channel.pipeline()
.addLast("agent-traffic-shaping", ats)
.addLast("length-decoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4))
.addLast(new AgentServerHandler())
.addLast("4b-length", new LengthFieldPrepender(4));
}
}
);
ChannelFuture f = b.bind().sync();
log.info("Started agent-side server at Port {}", port);
-------- Server's channelRead method-----------------
```java
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf data = (ByteBuf) msg;
log.info("SIZE {}", data.capacity());
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
System.out.print(s);
if (buffer != null) buffer.incomingPacket((ByteBuf) msg);
else {
log.error("Receiving buffer NULL for Remote Agent {}:{} ", remoteAgentIP, remoteAgentPort);
((ByteBuf) msg).release();
}
/* totalBytes += ((ByteBuf) msg).capacity();*/
}
```
------------ Client writing on the channel (ByteBuf data contains a valid HTTP request, 87 bytes) --------
```java
private void writeToAgentChannel(Channel currentChannel, ByteBuf data) {
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
log.info("SIZE {}", data.capacity());
System.out.print(s);
ChannelFuture cf = currentChannel.write(data);
currentChannel.flush();
/* wCount++;
if (wCount >= request.getRequest().getBufferSize() * request.getRequest().getNumParallelSockets()) {
for (Channel channel : channels)
channel.flush();
wCount = 0;
}*/
cf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture channelFuture) throws Exception {
if (cf.isSuccess()) {
totalBytes += data.capacity();
}
else log.error("Failed to write packet to channel {}", cf.cause());
}
});
}
```
However, the server receives an empty ByteBuf with a size of zero. What could be the cause here?
Your issue comes from the client, where you accidentally consume all the bytes inside the ByteBuf while trying to debug it:
```java
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
log.info("SIZE {}", data.capacity());
System.out.print(s);
```
Calling readCharSequence consumes the data, leaving you with 0 readable bytes, so an empty ByteBuf is written to the channel. I suggest using a debugging handler (for example, Netty's LoggingHandler) to inspect your pipeline, as that is tested not to affect the data.
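If you just need to look at the bytes, read them non-destructively instead; a minimal sketch (logPayload is a hypothetical helper; ByteBuf.toString(Charset) renders the readable bytes without moving the reader index, unlike readCharSequence):

```java
import io.netty.buffer.ByteBuf;
import io.netty.util.CharsetUtil;

final class PayloadDebug {
    // Minimal sketch: log the payload without consuming it, so the ByteBuf
    // can still be written to the channel afterwards.
    static void logPayload(ByteBuf data) {
        System.out.println("SIZE " + data.readableBytes());
        System.out.println(data.toString(CharsetUtil.UTF_8));
    }
}
```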
I am trying to implement a TCP server in Java using Netty. I can handle messages shorter than 1024 bytes correctly, but when I receive a message of more than 1024 bytes, I only see a partial message.
I did some research and found that I should implement a ReplayingDecoder, but I don't understand how to implement the decode method.
My messages use JSON.
Netty version: 4.1.27
```java
protected void decode(ChannelHandlerContext channelHandlerContext, ByteBuf byteBuf, List<Object> list) throws Exception
```
My server setup:
```java
EventLoopGroup group;
group = new NioEventLoopGroup(this.numThreads);
try {
ServerBootstrap serverBootstrap;
RequestHandler requestHandler;
ChannelFuture channelFuture;
serverBootstrap = new ServerBootstrap();
serverBootstrap.group(group);
serverBootstrap.channel(NioServerSocketChannel.class);
serverBootstrap.localAddress(new InetSocketAddress("::", this.port));
requestHandler = new RequestHandler(this.responseManager, this.logger);
serverBootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline().addLast(requestHandler);
}
});
channelFuture = serverBootstrap.bind().sync();
channelFuture.channel().closeFuture().sync();
}
catch(Exception e){
this.logger.info(String.format("Unknown failure %s", e.getMessage()));
}
finally {
try {
group.shutdownGracefully().sync();
}
catch (InterruptedException e) {
this.logger.info(String.format("Error shutting down %s", e.getMessage()));
}
}
```
My current request handler:
```java
package me.chirag7jain.Response;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
import org.apache.logging.log4j.Logger;
import java.net.InetSocketAddress;
public class RequestHandler extends ChannelInboundHandlerAdapter {
private ResponseManager responseManager;
private Logger logger;
public RequestHandler(ResponseManager responseManager, Logger logger) {
this.responseManager = responseManager;
this.logger = logger;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf byteBuf;
String data, hostAddress;
byteBuf = (ByteBuf) msg;
data = byteBuf.toString(CharsetUtil.UTF_8);
hostAddress = ((InetSocketAddress) ctx.channel().remoteAddress()).getAddress().getHostAddress();
if (!data.isEmpty()) {
String reply;
this.logger.info(String.format("Data received %s from %s", data, hostAddress));
reply = this.responseManager.reply(data);
if (reply != null) {
ctx.write(Unpooled.copiedBuffer(reply, CharsetUtil.UTF_8));
}
}
else {
logger.info(String.format("NO Data received from %s", hostAddress));
}
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
this.logger.info(String.format("Received Exception %s", cause.getMessage()));
ctx.close();
}
}
```
I would accept the data in channelRead() and accumulate it in a buffer. Before returning from channelRead(), I would invoke read() on the Channel. You may need to record other data as well, as per your needs.
When Netty invokes channelReadComplete(), that is the moment to send the whole buffer to your ResponseManager.
Channel.read(): Request to read data from the Channel into the first inbound buffer; this triggers a ChannelInboundHandler.channelRead(ChannelHandlerContext, Object) event if data was read, and triggers a channelReadComplete event so the handler can decide whether to continue reading.
Your Channel object is accessible by ctx.channel().
Try this code:
```java
private final AttributeKey<StringBuffer> dataKey = AttributeKey.valueOf("dataBuf");
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf byteBuf;
String data, hostAddress;
StringBuffer dataBuf = ctx.attr(dataKey).get();
boolean allocBuf = dataBuf == null;
if (allocBuf) dataBuf = new StringBuffer();
byteBuf = (ByteBuf) msg;
data = byteBuf.toString(CharsetUtil.UTF_8);
hostAddress = ((InetSocketAddress) ctx.channel().remoteAddress()).getAddress().getHostAddress();
if (!data.isEmpty()) {
this.logger.info(String.format("Data received %s from %s", data, hostAddress));
}
else {
logger.info(String.format("NO Data received from %s", hostAddress));
}
dataBuf.append(data);
if (allocBuf) ctx.attr(dataKey).set(dataBuf);
ctx.channel().read();
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
StringBuffer dataBuf = ctx.attr(dataKey).get();
if (dataBuf != null) {
String reply;
reply = this.responseManager.reply(dataBuf.toString());
if (reply != null) {
ctx.write(Unpooled.copiedBuffer(reply, CharsetUtil.UTF_8));
}
}
ctx.attr(dataKey).set(null);
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
```
An application protocol with variable-length messages must have one of the following:
a length word;
a terminator character or sequence, which in turn implies an escape character in case the data contains the terminator; or
a self-describing format such as XML (or JSON, as sketched below).
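Since the payload here is JSON, one concrete option is to put a framing decoder in front of the handler; a hedged sketch using Netty's JsonObjectDecoder (serverBootstrap and requestHandler are the ones from the question above):

```java
// Minimal sketch: JsonObjectDecoder buffers inbound bytes and emits one
// ByteBuf per complete top-level JSON object, so the handler no longer
// sees partial messages when a payload spans several TCP reads.
serverBootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel socketChannel) {
        socketChannel.pipeline()
                .addLast(new JsonObjectDecoder()) // frame on JSON object boundaries
                .addLast(requestHandler);         // then handle only whole messages
    }
});
```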
I am making a curl POST (curl -X POST -d "dsds" 10.0.0.211:5201) to my Netty socket server, but in my channelRead, when I try to cast the Object msg to FullHttpRequest, it throws the following exception.
```
java.lang.ClassCastException: io.netty.buffer.SimpleLeakAwareByteBuf cannot be cast to io.netty.handler.codec.http.FullHttpRequest
at edu.clemson.openflow.sos.host.netty.HostPacketHandler.channelRead(HostPacketHandler.java:42)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
at java.lang.Thread.run(Thread.java:748)
```
Following is my socket handler class:
```java
@ChannelHandler.Sharable
public class HostPacketHandler extends ChannelInboundHandlerAdapter {
private static final Logger log = LoggerFactory.getLogger(HostPacketHandler.class);
private RequestParser request;
public HostPacketHandler(RequestParser request) {
this.request = request;
log.info("Expecting Host at IP {} Port {}",
request.getClientIP(), request.getClientPort());
}
public void setRequestObject(RequestParser requestObject) {
this.request = requestObject;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
// Discard the received data silently.
InetSocketAddress socketAddress = (InetSocketAddress) ctx.channel().remoteAddress();
log.info("Got Message from {} at Port {}",
socketAddress.getHostName(),
socketAddress.getPort());
//FullHttpRequest request = (FullHttpRequest) msg;
log.info(msg.getClass().getSimpleName());
//((ByteBuf) msg).release();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
// Close the connection when an exception is raised.
cause.printStackTrace();
ctx.close();
}
}
```
Pipeline:
```java
public class NettyHostSocketServer implements IClientSocketServer {
protected static boolean isClientHandlerRunning = false;
private static final Logger log = LoggerFactory.getLogger(SocketManager.class);
private static final int CLIENT_DATA_PORT = 9877;
private static final int MAX_CLIENTS = 5;
private HostPacketHandler hostPacketHandler;
public NettyHostSocketServer(RequestParser request) {
hostPacketHandler = new HostPacketHandler(request);
}
private boolean startSocket(int port) {
NioEventLoopGroup group = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(group)
.channel(NioServerSocketChannel.class)
.localAddress(new InetSocketAddress(port))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(
hostPacketHandler);
}
});
ChannelFuture f = b.bind().sync();
log.info("Started host-side socket server at Port {}",CLIENT_DATA_PORT);
return true;
// Need to do socket closing handling. close all the remaining open sockets
//System.out.println(EchoServer.class.getName() + " started and listen on " + f.channel().localAddress());
//f.channel().closeFuture().sync();
} catch (InterruptedException e) {
log.error("Error starting host-side socket");
e.printStackTrace();
return false;
} finally {
//group.shutdownGracefully().sync();
}
}
@Override
public boolean start() {
if (!isClientHandlerRunning) {
isClientHandlerRunning = true;
return startSocket(CLIENT_DATA_PORT);
}
return true;
}
@Override
public int getActiveConnections() {
return 0;
}
}
```
I also used Wireshark to check whether I am receiving valid packets (a screenshot of the Wireshark dump accompanied the original question).
Your problem is that you never decode the ByteBuf into an actual HttpRequest object, which is why you get the error: you can't cast a ByteBuf to a FullHttpRequest.
You should do something like this:
```java
@Override
public void initChannel(Channel channel) throws Exception {
channel.pipeline().addLast(new HttpRequestDecoder()) // Decodes the ByteBuf into a HttpMessage and HttpContent (1)
.addLast(new HttpObjectAggregator(1048576)) // Aggregates the HttpMessage with its following HttpContent into a FullHttpRequest
.addLast(hostPacketHandler);
}
```
(1) If you also want to send an HttpResponse, use HttpServerCodec, which bundles the HttpRequestDecoder and HttpResponseEncoder.
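A sketch of that variant (the same initializer as above, with the two separate codec handlers replaced by HttpServerCodec):

```java
@Override
public void initChannel(Channel channel) throws Exception {
    channel.pipeline()
           .addLast(new HttpServerCodec())             // HttpRequestDecoder + HttpResponseEncoder in one
           .addLast(new HttpObjectAggregator(1048576)) // still aggregate into a FullHttpRequest
           .addLast(hostPacketHandler);
}
```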
I am going through Netty's documentation here and the diagram here.
My question is: the TimeServer writes the time into the socket for the client to read. Shouldn't it use a ChannelOutboundHandlerAdapter? Why is the logic in a ChannelInboundHandlerAdapter?
I couldn't understand this; please explain.
TimeServer:
```java
public class TimeServerHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelActive(final ChannelHandlerContext ctx) { // (1)
final ByteBuf time = ctx.alloc().buffer(4); // (2)
time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));
final ChannelFuture f = ctx.writeAndFlush(time); // (3)
f.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
assert f == future;
ctx.close();
}
}); // (4)
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
```
TimeClient:
```java
public class TimeClient {
public static void main(String[] args) throws Exception {
String host = args[0];
int port = Integer.parseInt(args[1]);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap(); // (1)
b.group(workerGroup); // (2)
b.channel(NioSocketChannel.class); // (3)
b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
b.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new TimeClientHandler());
}
});
// Start the client.
ChannelFuture f = b.connect(host, port).sync(); // (5)
// Wait until the connection is closed.
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
}
}
}
```
The reason we use a ChannelInboundHandlerAdapter is that we are writing into the channel that the client established to the server. Since the channel is inbound with respect to the server, we use a ChannelInboundHandlerAdapter. The client connects to the server through this channel, and the server sends the time out over it.
Because your server will respond to incoming messages, it will need to implement interface ChannelInboundHandler, which defines methods for acting on inbound events.
Further, ChannelInboundHandlerAdapter has a straightforward API, and each of its methods can be overridden to hook into the event lifecycle at the appropriate point.
I just started learning Netty, hope this helps. :)