Netty not sending certain character sequences - java

I started to familiarize myself with Netty since I plan to use it in a future project.
But I stumbled upon some weird behaviour.
Since I will be using a text protocol for the project, I started with the standard "text" pipeline of StringDecoder, StringEncoder and DelimiterBasedFrameDecoder. But now I have reduced it to the following:
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
    Bootstrap b = new Bootstrap();
    b.group(workerGroup);
    b.channel(NioSocketChannel.class);
    b.handler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addLast(new TestClientHandler());
        }
    });
    ChannelFuture f = b.connect("some.working.web.server", 80).sync();
    f.channel().closeFuture().sync();
} finally {
    workerGroup.shutdownGracefully();
}
And my TestClientHandler is:
public static class TestClientHandler extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channel active");
        ctx.writeAndFlush(Unpooled.copiedBuffer(
                "GET /index.html HTTP/1.0\r\n", CharsetUtil.US_ASCII));
        System.out.println("after write");
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        System.out.println("got" + msg);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
I looked at the traffic with Wireshark and when this is executed only the TCP handshake is performed and nothing is sent. After a while the program exits (when the HTTP server closes the connection).
The weirdest thing is that if I change the line:
ctx.writeAndFlush(Unpooled.copiedBuffer(
"GET /index.html HTTP/1.0\r\n", CharsetUtil.US_ASCII));
to
ctx.writeAndFlush(Unpooled.copiedBuffer(
"GET /index.html HTTPA1.0\r\n", CharsetUtil.US_ASCII));
Then the "request" gets sent to the server which of course rejects it and replies with an error.
I played with the "request string" a bit more and it seems that Netty for some reason does not like the following:
ctx.writeAndFlush(Unpooled.copiedBuffer(
" HTTP/1.0\n", CharsetUtil.US_ASCII));
This string does not get sent. But removing the leading whitespace, or changing a random character gets the string sent to the server.
And to make things even stranger, if I test this against an SMTP server, the "request" gets sent without problems. The only difference is that the SMTP server sends its greeting before my string goes out...
The behaviour is the same with Netty 4.0.23 and 4.1.0.Beta3, on Java 1.7.0_21 and 1.8.0_20.
It also stays the same if I change to OioEventLoopGroup and OioSocketChannel.
Also changing the channelActive method a bit:
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    System.out.println("channel active");
    ByteBuf bb = Unpooled.buffer();
    bb.writeBytes("GET index.htm HTTP/1.0\r\n".getBytes());
    System.out.println(ByteBufUtil.hexDump(bb));
    ctx.writeAndFlush(bb);
    System.out.println("after write");
}
shows that my "request" got turned into the proper sequence of bytes:
47455420696e6465782e68746d20485454502f312e300d0a
G E T i n d e x . h t m H T T P / 1 . 0 \r\n
I would appreciate it if someone has an explanation for this weirdness...

This problem turned out to be caused by the antivirus software.
Apparently the software I am using has a hacky parser for HTTP monitoring, and it ate my "request", which was in fact invalid: it was missing the extra \r\n at the end.
Turning the antivirus off solved the problem. Correcting the request to:
GET /index.html HTTP/1.0\r\n\r\n
also kept the antivirus happy.
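For reference, a minimal corrected channelActive (my own sketch, not part of the original code above) that terminates the HTTP/1.0 request with the blank line the parser apparently expects:

@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    // Request line plus the empty line that ends the header block
    ctx.writeAndFlush(Unpooled.copiedBuffer(
            "GET /index.html HTTP/1.0\r\n\r\n", CharsetUtil.US_ASCII));
}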
But why it decided to eat
GET /index.html HTTP/1.0\r\n
and not
GET /index.html HTTPA/1.0\r\n
is beyond me...

Related

Netty Correlating Request and Responses

I want to write a proxy for a TCP binary protocol. I’m using the HexDump example in Netty’s repo as a guide.
https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/proxy
This works fine. But I sometimes want to modify the response based on the original request.
Looking around, it seems that using the inbound channel's AttributeMap could be the place to store such request details. (Some more details below.)
io.netty.util.AttributeMap
But while it sort of works, sometimes one request overwrites the details of another request.
This makes sense: Netty is asynchronous and you can't really guarantee when something is going to happen.
So I was wondering how I can reliably correlate each request with its response. Note that I can't change the protocol, which might otherwise have been one way to pass details between request and response.
Thanks for your insight.
HexDumpFrontendHandler
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) throws InterruptedException {
    …
    ctx.channel().attr(utils.REQUEST_ATTRIBUTE).set(requestDetails);
    …
}

@Override
public void channelActive(ChannelHandlerContext ctx) {
    final Channel inboundChannel = ctx.channel();
    // Start the connection attempt.
    Bootstrap b = new Bootstrap();
    b.group(inboundChannel.eventLoop())
     .channel(ctx.channel().getClass())
     .handler(new HexDumpBackendHandler(inboundChannel))
     .option(ChannelOption.AUTO_READ, false);
    ChannelFuture f = b.connect(remoteHost, remotePort);
    outboundChannel = f.channel();
    f.addListener((ChannelFutureListener) future -> {
        if (future.isSuccess()) {
            // Connection complete, start to read the first data
            inboundChannel.read();
        } else {
            // Close the connection if the connection attempt has failed.
            inboundChannel.close();
        }
    });
}
HexDumpBackendHandler
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    …
    RequestDetails requestDetails = inboundChannel.attr(utils.REQUEST_ATTRIBUTE).getAndRemove();
    …
}
My solution (workaround?) to this was the following. The protocol I was working with couldn't guarantee a globally unique identifier per request, but it did uniquely identify requests within a TCP connection.
So the following combination allowed me to create a ConcurrentHashMap keyed by:
host + ephemeral port + identifier local to the connection
This worked for my case. I'm sure there are other ways to solve it within the Netty framework itself, as sketched below.
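A rough sketch of that map (my own illustration, not the actual code; RequestDetails is the class from the handlers above, and I'm assuming the connection-local id can be parsed out of each request and response):

// Hypothetical correlation helper: the key combines the client host, its
// ephemeral port and the connection-local request identifier.
// Assumed imports: java.net.InetSocketAddress, java.util.concurrent.ConcurrentHashMap,
// java.util.concurrent.ConcurrentMap, io.netty.channel.Channel.
final class RequestCorrelator {
    private final ConcurrentMap<String, RequestDetails> pending =
            new ConcurrentHashMap<String, RequestDetails>();

    private String key(Channel inboundChannel, long connectionLocalId) {
        InetSocketAddress remote = (InetSocketAddress) inboundChannel.remoteAddress();
        return remote.getHostString() + ":" + remote.getPort() + ":" + connectionLocalId;
    }

    // Called from the frontend handler when the request is seen
    void remember(Channel inboundChannel, long id, RequestDetails details) {
        pending.put(key(inboundChannel, id), details);
    }

    // Called from the backend handler when the matching response arrives
    RequestDetails recall(Channel inboundChannel, long id) {
        return pending.remove(key(inboundChannel, id));
    }
}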

Netty server sends a byte[] encoded by Protobuf, but C# client Socket.Receive keeps returning 0

I am trying to build a Unity game demo with networking, using C# for the client and Java for the server.
To be specific, server communication is implemented with Netty.
I also brought in Protobuf, which helps me define the message protocol.
As I am new to server programming, my code does not yet deal with TCP packet merging and loss.
When I created sockets from the client and sent messages to the server, everything went well.
The problem happened when the server replied:
In the client, an async method is ready to receive messages. When I simply sent a string message from the server, the method was able to get it.
But when I replaced the message with a 4-byte byte[] encoded from a Protobuf Message object, the client showed that it received NOTHING.
When I print what I've sent in the server console, it looks like this:
00001000
00000001
00010000
00000001
My server code overrides Netty's channelRead and channelReadComplete methods.
In channelRead, ChannelHandlerContext.write is invoked to write the message to the outbound buffer.
In channelReadComplete, ChannelHandlerContext.flush is invoked so that the message is finally sent.
channelRead()
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    Request.MsgPack msgPack = (Request.MsgPack) msg;
    Request.MsgPack.MsgType type = msgPack.getType();
    switch (type)
    {
        case GetServerState:
            final Request.GetServerState gssbody = msgPack.getGetServerState();
            System.out.println("Received message of type " + type + ", content:" +
                    "\nrequestId = " + gssbody.getRequestId()
            );
            byte[] bytes = ServerStateManager.getState(gssbody.getRequestId());
            ctx.write(bytes);
            break;
        // (other message types omitted in the original post)
    }
}
getState(): including Protobuf-encoding procedure
public static byte[] getState(int requestId)
{
    ReturnServerState.Message.Builder replyBuilder = ReturnServerState.Message.newBuilder();
    replyBuilder.setRequestId(requestId);
    replyBuilder.setIsIdle(new ServerStateManager().isIdle());
    return replyBuilder.build().toByteArray();
}
channelReadComplete()
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    try
    {
        ctx.flush();
    }
    finally
    {
        ctx.close();
    }
}
Client code:
public class ShortLink
{
    Socket clientSocket = null;
    static byte[] result = new byte[1024];

    Task ReceiveAsync<T>(string ip, int port)
    {
        return Task.Run(() =>
        {
            T component = default(T);
            while (clientSocket.Receive(result) == 0)
            {
                break;
ReceiveAsync is invoked like this:
await ReceiveAsync<ReturnServerState>(ip, port);
When I found that clientSocket.Receive(result) always returned 0, I tried to log result[0], result[1], result[2], result[3] like this:
Debug.Log(Convert.ToString(result[0]) + ", " +
          Convert.ToString(result[1]) + ", " +
          Convert.ToString(result[2]) + ", " +
          Convert.ToString(result[3]));
And the log turned out to be 0, 0, 0, 0.
I will be grateful for any idea of "why the client socket received nothing", and the solution.
Since I come from Asia, there may be a time lag between your reply and mine, and also English is not my mother tongue. However, I will try my best to reply in time.
Thanks a lot!
Okay... I have finally solved it myself.
1. The usage "return replyBuilder.build().toByteArray()" is wrong, because ProtobufEncoder already does toByteArray() for me:
public class ProtobufEncoder extends MessageToMessageEncoder<MessageLiteOrBuilder> {
    public ProtobufEncoder() {
    }

    protected void encode(ChannelHandlerContext ctx, MessageLiteOrBuilder msg, List<Object> out) throws Exception {
        if (msg instanceof MessageLite) {
            out.add(Unpooled.wrappedBuffer(((MessageLite) msg).toByteArray()));
        } else {
            if (msg instanceof Builder) {
                out.add(Unpooled.wrappedBuffer(((Builder) msg).build().toByteArray()));
            }
        }
    }
}
So once I registered "new ProtobufEncoder()" in the Netty Channel Pipeline, I can just use "return replyBuilder.build()" - that is correct.
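For illustration, here is a rough sketch (my own, not the actual project code) of a pipeline where writing the built message works; the varint frame handlers are the usual companions of ProtobufDecoder/ProtobufEncoder, and MyServerHandler stands in for my real handler:

// Hypothetical pipeline sketch: with ProtobufEncoder registered, the handler
// can write replyBuilder.build() (a MessageLite) directly and the encoder
// turns it into bytes on the way out.
ch.pipeline()
  .addLast(new ProtobufVarint32FrameDecoder())
  .addLast(new ProtobufDecoder(Request.MsgPack.getDefaultInstance()))
  .addLast(new ProtobufVarint32LengthFieldPrepender())
  .addLast(new ProtobufEncoder())
  .addLast(new MyServerHandler());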
2.In "static byte[] result = new byte[1024];", The length of received message is defined casually, and it doesn't matter - until it really receives a message.
When receiving message, I shall always copy the message bytes to a new byte[] with a correct length firstly - or there will be just a 1024-length bytes[], with the data I need at the beginning, and several zeroes following, which will certainly fail to be decoded.

Netty NIO SSL error when running out of filedescriptors

I am running a server performance test using a Netty-based tester app as a client. The connection is over SSL sockets; I send a registration and the server starts streaming data back. So I try to create as many connections as the server can handle.
I get up to about 4000 sockets on my tester until it (the client OS process) runs out of file descriptors due to too many open sockets. This would be fine if I got a proper error message from Netty. However, the only thing Netty gives me is java.nio.channels.ClosedChannelException. This does not even have a stack trace.
After various runs with the debugger, I believe this is due to io.netty.handler.ssl.SslHandler handling such errors as:
private static final ClosedChannelException CHANNEL_CLOSED = new ClosedChannelException();

static {
    CHANNEL_CLOSED.setStackTrace(EmptyArrays.EMPTY_STACK_TRACE);
}

@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    // Make sure to release SSLEngine,
    // and notify the handshake future if the connection has been closed during handshake.
    setHandshakeFailure(ctx, CHANNEL_CLOSED);
    super.channelInactive(ctx);
}
In the end this results in throwing the ClosedChannelException with no stack trace. If I run this under the debugger and set breakpoints in Netty earlier, it appears to be an SSL handshake timeout. I believe this timeout is caused by running out of file descriptors. No idea why Netty treats it as a timeout, though.
The reason I believe it is running out of file descriptors is that earlier versions of this test got "too many open files in the system" exceptions. After reducing the use of files elsewhere in the code it now gets this far, but I no longer get a meaningful error message. I also still get the "too many open files" error if I concurrently run other software that keeps opening files at the time Netty hangs on this.
I am wondering if there is some trick for me to get Netty to properly report the actual cause of failure?
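One idea I have been considering (just a sketch, not wired into the tester yet) is to attach a listener to the SslHandler handshake future so the handshake failure cause gets logged directly, instead of only surfacing later as the stackless ClosedChannelException:

// Sketch: log why the TLS handshake failed (sslHandler as created in SSLClientNetty() below).
sslHandler.handshakeFuture().addListener(new GenericFutureListener<Future<Channel>>() {
    @Override
    public void operationComplete(Future<Channel> future) {
        if (!future.isSuccess()) {
            log.error("TLS handshake failed", future.cause());
        }
    }
});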
Here is the main relevant client init code:
private static final EventLoopGroup group = new NioEventLoopGroup();

public SSLClientNetty() throws Exception {
    SSLContext context = SSLContext.getInstance("TLS");
    context.init(keyManagers, trustManagers, null);
    SSLEngine sslEngine = context.createSSLEngine();
    sslEngine.setUseClientMode(true);
    SslHandler sslHandler = new SslHandler(sslEngine);
    // This is the time Netty waits before throwing the ClosedChannelException after reaching the file limit
    sslHandler.setHandshakeTimeoutMillis(5000);
    try {
        Bootstrap b = new Bootstrap();
        b.group(group)
         .channel(NioSocketChannel.class)
         .handler(new MyInitializer(sslHandler));
        ch = b.connect("localhost", 5555).sync().channel();
    } catch (Exception e) {
        log.error("Error connecting to server", e);
        throw new RuntimeException("Error connecting to server", e);
    }
}
}
The main relevant code for MyInitializer:
@Override
protected void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(sslHandler);
    ch.pipeline().addLast("bytesEncoder", new ByteArrayEncoder());
    ch.pipeline().addLast(new MyDecoder());
}

@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
    super.exceptionCaught(ctx, cause);
    log.error("Error in initializing connection", cause);
}
In MyDecoder, just to make sure, I also log any exceptions:
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
    super.exceptionCaught(ctx, cause);
    log.error("Error in decoder", cause);
    throw new RuntimeException("Error in decoder", cause);
}
The main loop for tester creating connections:
while (true) {
    SSLClientNetty client = new SSLClientNetty();
    client.register();
    Thread.sleep(10);
}
Right now the error message is only this:
4000:...................java.nio.channels.ClosedChannelException
where the tester prints a dot for every successfully opened connection, and it ends with Netty throwing ClosedChannelException with no stack trace (as explained above).
So, just to reiterate: I am looking for a better error report on what is actually causing the connections to fail, and I want to understand how Netty handles running out of sockets and how I should manage that.

socket messages being split by netty

I wrote a REST server based on Netty 4. The client handler looks something like the following.
The ByteBuf capacity in the msg provided by Netty varies. When the client message is larger than the buffer, the message gets split. What I find is that both channelRead and channelReadComplete get called for each fragment. What I usually see is that the ByteBuf is around 512 bytes and the message around 600. I get a channelRead for the first 512 bytes, followed by a channelReadComplete for them, and then another channelRead for the remaining 100 bytes and a channelReadComplete for them - 2 messages instead of 1.
I found a few related questions here, but I am wondering what is the point of channelReadComplete? Is it really called after every channelRead? As long as there are bytes available, shouldn't they be read in before channelReadComplete is called?
public class ClientHandler extends ChannelInboundHandlerAdapter {
    // ...

    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        Report.debug("Read from client");
        ByteBuf buf = (ByteBuf) msg;
        String contents = buf.toString(io.netty.util.CharsetUtil.US_ASCII);
        ReferenceCountUtil.release(msg);
        ClientConnection client = ClientConnection.get(ctx);
        if (client != null) {
            client.messageText(contents); // adds text to buffer
            return;
        }
        // (parse serial number from contents, process registration)
        ClientConnection.online(serialNumber, ctx); // register success, create the client object
    }

    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ClientConnection client = ClientConnection.get(ctx);
        if (client == null)
            Report.debug("completed read of message from unregistered client");
        else {
            Report.debug("completed read of message from client " + client.serialNumber());
            String contents = client.messageText();
            // ... (process message)
        }
    }
}
channelReadComplete is NOT called after each channelRead. The Netty event loop will read from the NIO socket and fire multiple channelRead events until there is no more data to read or it should give up; then channelReadComplete is fired.
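A minimal sketch (mine, not from either answer) that makes this visible by counting how many channelRead calls precede each channelReadComplete:

// Pass-through handler that logs the read-loop behaviour described above.
public class ReadLoopLogger extends ChannelInboundHandlerAdapter {
    private int readsInThisLoop;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        readsInThisLoop++;
        ctx.fireChannelRead(msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        System.out.println("channelReadComplete after " + readsInThisLoop + " channelRead call(s)");
        readsInThisLoop = 0;
        ctx.fireChannelReadComplete();
    }
}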
Yes, channelReadComplete() is called after each channelRead() in the pipeline has finished. If an exception occurs in channelRead(), it will jump to the method exceptionCaught().
So you should put code into channelReadComplete() that you only want executed after a successful channelRead().
For example, this is what our project does:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    // compute msg
    ctx.fireChannelRead(msg); // tells the next handler in the pipeline (if existing) to read the channel
}

@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    ctx.writeAndFlush("OK");
    ctx.fireChannelReadComplete();
}

@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
    logger.error(Message.RCV_ERROR, cause.getMessage());
    ctx.writeAndFlush(cause.getMessage());
    ctx.close();
}
If the client receives something other than "OK", then it doesn't have to send the rest.
If you're looking for a method that gets called after all packets have arrived, then:
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    // close the writer that wrote the message to file (for example)
}
EDIT: You could also try sending bigger packets. The message size is controlled by the client, I think.

Netty slower than Tomcat

We just finished building a server to store data to disk and fronted it with Netty. During load testing we were seeing Netty scale to about 8,000 messages per second. Given our systems, this looked really low. For a benchmark, we wrote a Tomcat front-end and ran the same load tests. With these tests we were getting roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is why there is such a huge difference in performance. Is there something obvious with respect to Netty that can get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        return Channels.pipeline(decoder, handler);
    }
});
server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {

    @Override
    protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null;
        }
        buffer.markReaderIndex();
        int length = buffer.readInt();
        if (buffer.readableBytes() < length) {
            buffer.resetReaderIndex();
            return null;
        }
        return buffer;
    }
}

public class ContentStoreChannelHandler extends SimpleChannelHandler {
    private final RequestHandler handler;

    @Inject
    public ContentStoreChannelHandler(RequestHandler handler) {
        this.handler = handler;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer in = (ChannelBuffer) e.getMessage();
        in.readerIndex(4);
        ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
        out.writerIndex(8); // Skip the length and status code
        boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
        if (success) {
            out.setInt(0, out.writerIndex() - 8); // length
            out.setInt(4, 0); // status
        }
        Channels.write(e.getChannel(), out, e.getRemoteAddress());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        Throwable throwable = e.getCause();
        ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
        out.writeInt(0); // length
        out.writeInt(Errors.generalException.getCode()); // status
        Channels.write(ctx, e.getFuture(), out);
    }

    @Override
    public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
        NettyContentStoreServer.allChannels.add(e.getChannel());
    }
}
UPDATE:
I've managed to get my Netty solution to within 4,000/second. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000/second with Netty and 25,000/second with Tomcat.
Although this is great news on the Netty side, I'm still getting 4,000/second less with Netty than with Tomcat. I can post my client side (which I thought I had ruled out, but apparently not) if anyone is interested in seeing it.
The method messageReceived is executed by a worker thread that is possibly getting blocked by RequestHandler#handle, which may be busy doing some I/O work.
You could try adding an OrderedMemoryAwareThreadPoolExecutor (recommended) into the channel pipeline for executing the handlers, or alternatively try dispatching your handler work to a new ThreadPoolExecutor and passing a reference to the socket channel for later writing the response back to the client. Ex.:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
    executor.submit(new Runnable() {
        @Override
        public void run() {
            processHandlerAndRespond(e);
        }
    });
}

private void processHandlerAndRespond(MessageEvent e) {
    ChannelBuffer in = (ChannelBuffer) e.getMessage();
    in.readerIndex(4);
    ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
    out.writerIndex(8); // Skip the length and status code
    boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
    if (success) {
        out.setInt(0, out.writerIndex() - 8); // length
        out.setInt(4, 0); // status
    }
    Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
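And a rough sketch of the recommended pipeline-level variant (my illustration, assuming Netty 3.x; the pool sizes are placeholders): wrap an OrderedMemoryAwareThreadPoolExecutor in an ExecutionHandler and add it in front of the blocking handler, which keeps per-channel event ordering while moving the work off the I/O threads.

final ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        // executionHandler hands messageReceived off to the thread pool
        return Channels.pipeline(decoder, executionHandler, handler);
    }
});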
