AsynchronousSocketChannel#read is never completing - java

I'm experimenting with NIO2 and running into an issue.
Here's the code I'm using:
ByteBuffer buffer = ByteBuffer.allocate(DEFAULT_BUFFER_SIZE);
channel.read(buffer, null, new CompletionHandler<Integer, Object>() {
    @Override
    public void completed(Integer result, Object attachment) {
        Packet packet = new Packet(buffer.getInt(), buffer);
        PacketHandler handler = PacketHandler.forOpcode(packet.getOpcode());
        if (!Objects.isNull(handler)) {
            handler.handle(channel, packet);
        } else {
            System.out.println("Unexpected opcode received from client. Opcode: " + packet.getOpcode());
        }
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        System.out.println("DEBUG A");
        exc.printStackTrace();
    }
});
The issue is that no matter what I send to the server, the read never completes. For testing purposes I have a very simple flat-format login packet set up, and I'm sending this data from the client:
ByteBuffer buffer = ByteBuffer.allocate(28);
buffer.putInt(1); //opcode
ByteBufferUtils.putString(buffer, "admin");
ByteBufferUtils.putString(buffer, "admin");
channel.write(buffer);
Even though the client writes the data, the server never finishes reading it. I've also made sure that DEFAULT_BUFFER_SIZE was equal to the sent buffer size in case that was the issue, but it made no difference.
Whenever I disconnect the client (currently using a thread to keep it alive, for no particular reason) I get the following stack trace from failed():
java.io.IOException: The specified network name is no longer available.
at sun.nio.ch.Iocp.translateErrorToIOException(Iocp.java:309)
at sun.nio.ch.Iocp.access$700(Iocp.java:46)
at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:399)
at java.lang.Thread.run(Thread.java:745)

You aren't sending anything. You need to flip() the buffer before calling write(), and compact() it afterwards.
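For reference, a minimal sketch of the corrected client-side send, reusing the buffer and ByteBufferUtils helper from the question (whether the trailing compact() is needed depends on whether the buffer is reused afterwards):

ByteBuffer buffer = ByteBuffer.allocate(28);
buffer.putInt(1); // opcode
ByteBufferUtils.putString(buffer, "admin");
ByteBufferUtils.putString(buffer, "admin");

buffer.flip();           // switch from filling to draining: limit = position, position = 0
channel.write(buffer);   // now the bytes between position and limit are actually sent
buffer.compact();        // keep any unwritten bytes and make the buffer writable again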

Related

Event driven and asynchronous serial port communication simultaneously

I'm completely new to serial port communication and need some help grasping it.
I need to communicate with a control board. This board can sometimes send events that I need to react to, and I need to send events to the board and await a response.
We have established a protocol where each event is always 12 bytes and the first 2 bytes determine the event type.
I know that when I send a specific message, I need to await a message with specific signifying bytes. At the same time I want it to be possible to react to events that are sent from the board. For instance the board might say that it is overheating, and at the same time I'm asking it to perform some command and reply.
My question is: if I write to the port and block for a second while awaiting the expected response, how do I ensure I don't "steal" the data my listener expects? E.g. do serial ports work like a stream, where once I've read some data I've advanced past the point where it can be re-read?
I've done some implementation of this using jSerialComm, hopefully this can shed some light on my question.
First a listener that is registered using the addDataListener method. I want this to trigger when an event is present on the port that starts with "T".
private static LockerSerialPort getLockerSerialPort(final DeviceClient client) {
    return MySerialPort.create(COM_PORT)
            .addListener(EventListener.newBuilder()
                    .addEventHandler(createLocalEventHandler())
                    .build());
}

private static EventHandler createLocalEventHandler() {
    return new EventHandler() {
        @Override
        public void execute(final byte[] event) {
            System.out.println(new String(event));
        }

        @Override
        public byte[] getEventIdentifier() {
            // I want this listener to be executed when events that start with T are sent to the port
            return "T".getBytes();
        }

        @Override
        public String getName() {
            return "T handler";
        }
    };
}
Next, I want to be able to write to the port and immediately get the response because it is needed to know if the command was successful or not.
private byte[] waitForResponse(final byte[] bytes) throws LockerException {
    write(bytes);
    return blockingRead();
}

private void write(final byte[] bytes) throws LockerException {
    try (var out = serialPort.getOutputStream()) {
        out.write(bytes);
    } catch (final IOException e) {
        throw Exception.from(e, "Failed to write to serial port %s", getComPort());
    }
}

public byte[] blockingRead() {
    return blockingRead(DEFAULT_READ_TIMEOUT);
}

private byte[] blockingRead(final int readTimeout) {
    serialPort.setComPortTimeouts(SerialPort.TIMEOUT_READ_SEMI_BLOCKING, readTimeout, 0);
    try {
        byte[] readBuffer = new byte[PACKET_SIZE];
        final int bytesRead = serialPort.readBytes(readBuffer, readBuffer.length);
        if (bytesRead != PACKET_SIZE) {
            throw RuntimeException.from(null, "Expected %d bytes in packet, got %d", PACKET_SIZE, bytesRead);
        }
        return readBuffer;
    } catch (final Exception e) {
        throw RuntimeException.from(e, "Failed to read packet within specified time (%d ms)", readTimeout);
    }
}
When I call waitForResponse("command"), how do I know my blocking read doesn't steal data from my listener?
Are these two patterns incompatible? How would one usually handle a scenario like this?
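One pattern that is often suggested for this kind of setup is to give a single reader ownership of the port and have it dispatch every complete 12-byte frame, routing expected replies to a queue that the blocking caller waits on, so the event listener and the request/response path never read the port directly and cannot steal bytes from each other. A minimal sketch under assumptions (dispatchFrame, handleBoardEvent, and write below are hypothetical helpers, not jSerialComm API):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Single dispatch point: only the port's data listener calls this, once per complete 12-byte frame.
private final BlockingQueue<byte[]> pendingResponses = new LinkedBlockingQueue<>();

void dispatchFrame(byte[] frame) {
    if (frame[0] == 'T') {
        handleBoardEvent(frame);       // unsolicited event, e.g. an overheating notification
    } else {
        pendingResponses.offer(frame); // a reply that a blocked waitForResponse() call is expecting
    }
}

// The write-and-await path waits on the queue instead of reading the port itself.
byte[] waitForResponse(byte[] command, long timeoutMs) throws InterruptedException {
    write(command);
    byte[] reply = pendingResponses.poll(timeoutMs, TimeUnit.MILLISECONDS);
    if (reply == null) {
        throw new IllegalStateException("No response within " + timeoutMs + " ms");
    }
    return reply;
}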

Netty server send a byte[] encoded by Protobuf, but C# client Socket.Receive keeps being 0

I am trying to build a Unity game demo with networking, using C# for the client and Java for the server.
To be specific, the server-side communication is implemented with Netty.
I also brought in Protobuf, which helps me define the message protocols.
As I am new to server programming, dealing with packet merging and loss in TCP has not been considered in my code yet.
When I created sockets from the client and sent messages to the server, everything went well.
The problem happened when the server replied:
In the client, an async method is ready to receive messages. When I simply sent a string-format message from the server, the method was able to get it.
But when I replaced the message with a 4-byte byte[], encoded from a Protobuf Message object, the client just showed that it received NOTHING.
When I print what I've sent in the server console, it looks like this:
00001000
00000001
00010000
00000001
My server code overrides Netty's channelRead and channelReadComplete methods.
In channelRead, ChannelHandlerContext.write is invoked to write the message to the outbound buffer.
In channelReadComplete, ChannelHandlerContext.flush is invoked, so that the message is finally sent.
channelRead()
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    Request.MsgPack msgPack = (Request.MsgPack) msg;
    Request.MsgPack.MsgType type = msgPack.getType();
    switch (type)
    {
        case GetServerState:
            final Request.GetServerState gssbody = msgPack.getGetServerState();
            System.out.println("Received a message of type " + type + ", content:" +
                    "\nrequestId = " + gssbody.getRequestId()
            );
            byte[] bytes = ServerStateManager.getState(gssbody.getRequestId());
            ctx.write(bytes);
            break;
getState(), which includes the Protobuf-encoding procedure:
public static byte[] getState(int requestId)
{
    ReturnServerState.Message.Builder replyBuilder = ReturnServerState.Message.newBuilder();
    replyBuilder.setRequestId(requestId);
    replyBuilder.setIsIdle(new ServerStateManager().isIdle());
    return replyBuilder.build().toByteArray();
}
channelReadComplete()
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    try
    {
        ctx.flush();
    }
    finally
    {
        ctx.close();
    }
}
Client code:
public class ShortLink
{
    Socket clientSocket = null;
    static byte[] result = new byte[1024];

    Task ReceiveAsync<T>(string ip, int port)
    {
        return Task.Run(() =>
        {
            T component = default(T);
            while (clientSocket.Receive(result) == 0)
            {
                break;
ReceiveAsync is invoked in the way of:
await ReceiveAsync<ReturnServerState>(ip, port);
When I found that clientSocket.Receive(result) always returned 0, I tried to log result[0], result[1], result[2], and result[3] like this:
Debug.Log(Convert.ToString(result[0]) + ", " +
Convert.ToString(result[1]) + ", " +
Convert.ToString(result[2]) + ", " +
Convert.ToString(result[3]));
And the log turned out to be 0, 0, 0, 0.
I will be grateful for any idea of "why the client socket received nothing", and the solution.
Since I come from Asia, there may be a time lag between your reply and mine, and also English is not my mother tongue. However, I will try my best to reply in time.
Thanks a lot!
Okay... I have finally solved it myself.
1. The usage "return replyBuilder.build().toByteArray()" is wrong, because ProtobufEncoder already does toByteArray() for me:
public class ProtobufEncoder extends MessageToMessageEncoder<MessageLiteOrBuilder> {
    public ProtobufEncoder() {
    }

    protected void encode(ChannelHandlerContext ctx, MessageLiteOrBuilder msg, List<Object> out) throws Exception {
        if (msg instanceof MessageLite) {
            out.add(Unpooled.wrappedBuffer(((MessageLite) msg).toByteArray()));
        } else {
            if (msg instanceof Builder) {
                out.add(Unpooled.wrappedBuffer(((Builder) msg).build().toByteArray()));
            }
        }
    }
}
So once I registered "new ProtobufEncoder()" in the Netty channel pipeline, I can just use "return replyBuilder.build()" - that is correct.
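As a sketch of what that registration might look like (the ServerHandler name and the varint length framing below are assumptions, not taken from the question):

ChannelPipeline p = ch.pipeline();
// inbound: reassemble one varint-length-prefixed frame, then decode it into a Request.MsgPack
p.addLast(new ProtobufVarint32FrameDecoder());
p.addLast(new ProtobufDecoder(Request.MsgPack.getDefaultInstance()));
// outbound: prepend the varint length, and let ProtobufEncoder call toByteArray() on the Message
p.addLast(new ProtobufVarint32LengthFieldPrepender());
p.addLast(new ProtobufEncoder());
p.addLast(new ServerHandler()); // the handler that now writes replyBuilder.build() directly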
2.In "static byte[] result = new byte[1024];", The length of received message is defined casually, and it doesn't matter - until it really receives a message.
When receiving message, I shall always copy the message bytes to a new byte[] with a correct length firstly - or there will be just a 1024-length bytes[], with the data I need at the beginning, and several zeroes following, which will certainly fail to be decoded.

socket messages being split by netty

I wrote a REST server based on netty 4. The client handler looks something like the following.
The ByteBuf capacity in the msg provided by Netty varies. When the client message is larger than the buffer, the message gets split. What I find is that both channelRead and channelReadComplete get called for each fragment. What I usually see is that the ByteBuf is around 512 bytes and the message around 600. I get a channelRead for the first 512 bytes, followed by a channelReadComplete for them, and then another channelRead for the remaining 100 bytes and a channelReadComplete for them - 2 messages instead of 1.
I found a few related questions here, but I am wondering what is the point of channelReadComplete? Is it really called after every channelRead? As long as there are bytes available, shouldn't they be read in before channelReadComplete is called?
public class ClientHandler extends ChannelInboundHandlerAdapter {
    ....
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        Report.debug("Read from client");
        ByteBuf buf = (ByteBuf) msg;
        String contents = buf.toString(io.netty.util.CharsetUtil.US_ASCII);
        ReferenceCountUtil.release(msg);
        ClientConnection client = ClientConnection.get(ctx);
        if (client != null) {
            client.messageText(contents); // adds text to buffer
            return;
        }
        ((parse serial number from contents, process registration))
        ClientConnection.online(serialNumber, ctx); // register success, create the client object
    }

    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ClientConnection client = ClientConnection.get(ctx);
        if (client == null)
            Report.debug("completed read of message from unregistered client");
        else {
            Report.debug("completed read of message from client " + client.serialNumber());
            String contents = client.messageText();
            ... ((process message))
        }
    }
}
channelReadComplete is NOT called after each channelRead. The Netty event loop will read from the NIO socket and fire channelRead multiple times until there is no more data to read (or it decides to give up), and only then is channelReadComplete fired.
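So a handler should not assume that one channelRead (or one channelRead/channelReadComplete pair) corresponds to one application-level message. A common way to deal with the fragmentation described in the question is to put a frame decoder in front of the handler so it only ever sees complete messages; a minimal sketch, assuming the protocol has a line delimiter (the delimiter choice and max frame length are assumptions):

// In the ChannelInitializer, in front of ClientHandler: buffer fragments until a
// complete line-delimited frame has arrived, so channelRead sees whole messages.
ch.pipeline().addLast(
        new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()),
        new ClientHandler());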
Yes, channelReadComplete() is called after each channelRead() in the pipeline has finished. If an exception occurs in channelRead() then it will jump to the method exceptionCaught().
So you should put code into channelReadComplete() that you only want to have executed on a successful channelRead().
For example this is what our project does:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    // compute msg
    ctx.fireChannelRead(msg); // tells the next handler in the pipeline (if existing) to read the channel
}

@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    ctx.writeAndFlush("OK");
    ctx.fireChannelReadComplete();
}

@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
    logger.error(Message.RCV_ERROR, cause.getMessage());
    ctx.writeAndFlush(cause.getMessage());
    ctx.close();
}
If the client receives something other than "OK", then it doesn't have to send the rest.
If you're looking for a method that gets called after all packets have arrived, then:
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    // close the writer that wrote the message to file (for example)
}
EDIT: You could also try sending bigger packets. The message size is controlled by the client, I think.

RabbitMQ Java Client Asynchronous Topic Receipt

I have a program that sends and receives messages over an exchange. My program needs to continue execution regardless of whether there is a message for it in the queue. Almost all the tutorials have blocking examples:
while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    System.out.println("Message: " + new String(delivery.getBody()));
    ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}
I came across what I understand to be the asynchronous version i.e., the handleDelivery function is called (callback) when a message is available in the queue:
boolean autoAck = false;
channel.basicConsume(queueName, autoAck, "myConsumerTag",
        new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag,
                                       Envelope envelope,
                                       AMQP.BasicProperties properties,
                                       byte[] body)
                    throws IOException
            {
                String routingKey = envelope.getRoutingKey();
                String contentType = properties.contentType;
                long deliveryTag = envelope.getDeliveryTag();
                // (process the message components here ...)
                channel.basicAck(deliveryTag, false);
            }
        });
After reading over the documentation I'm still unsure whether the above code snippet is indeed asynchronous and I still can't figure out how to get the actual message that was sent. Some help please.
Without trying the second code snippet, I can say that it probably does what you want (the byte[] body parameter of handleDelivery is the actual message that was sent). However, it presumably does this using a thread internally, which will be blocked while waiting for a new message. What I do is stick the while loop in a new Thread so that only that thread is blocked and the rest of your program continues asynchronously.
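A minimal sketch of that approach, reusing the consumer loop from the question (the thread name is an assumption):

Thread consumerThread = new Thread(() -> {
    try {
        while (true) {
            // blocks only this thread; the rest of the program keeps running
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            System.out.println("Message: " + new String(delivery.getBody()));
            ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}, "rabbitmq-consumer");
consumerThread.setDaemon(true);
consumerThread.start();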

Netty slower than Tomcat

We just finished building a server to store data to disk and fronted it with Netty. During load testing we were seeing Netty scale to about 8,000 messages per second. Given our systems, this looked really low. For a benchmark, we wrote a Tomcat front-end and ran the same load tests. With these tests we were getting roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is: why is there such a huge difference in performance? Is there something obvious with respect to Netty that can get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        return Channels.pipeline(decoder, handler);
    }
});
server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {
    @Override
    protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null;
        }
        buffer.markReaderIndex();
        int length = buffer.readInt();
        if (buffer.readableBytes() < length) {
            buffer.resetReaderIndex();
            return null;
        }
        return buffer;
    }
}

public class ContentStoreChannelHandler extends SimpleChannelHandler {
    private final RequestHandler handler;

    @Inject
    public ContentStoreChannelHandler(RequestHandler handler) {
        this.handler = handler;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer in = (ChannelBuffer) e.getMessage();
        in.readerIndex(4);

        ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
        out.writerIndex(8); // Skip the length and status code

        boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
        if (success) {
            out.setInt(0, out.writerIndex() - 8); // length
            out.setInt(4, 0); // Status
        }
        Channels.write(e.getChannel(), out, e.getRemoteAddress());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        Throwable throwable = e.getCause();

        ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
        out.writeInt(0); // Length
        out.writeInt(Errors.generalException.getCode()); // status

        Channels.write(ctx, e.getFuture(), out);
    }

    @Override
    public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
        NettyContentStoreServer.allChannels.add(e.getChannel());
    }
}
UPDATE:
I've managed to get my Netty solution to within 4,000/second of Tomcat. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a Socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000/second with Netty and 25,000/second with Tomcat.
Although this is great news on the Netty side, I'm still getting 4,000/second less with Netty than with Tomcat. I can post my client side (which I thought I had ruled out, but apparently not) if anyone is interested in seeing it.
The method messageReceived is executed on an I/O worker thread, which is possibly getting blocked by RequestHandler#handle if it is busy doing I/O work.
You could try adding an OrderedMemoryAwareThreadPoolExecutor (recommended) into the channel pipeline for executing the handlers, or alternatively, try dispatching your handler work to a new ThreadPoolExecutor and passing a reference to the socket channel for later writing the response back to the client. Ex.:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
    // executor is the ThreadPoolExecutor mentioned above
    executor.submit(new Runnable() {
        @Override
        public void run() {
            processHandlerAndRespond(e);
        }
    });
}

private void processHandlerAndRespond(MessageEvent e) {
    ChannelBuffer in = (ChannelBuffer) e.getMessage();
    in.readerIndex(4);

    ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
    out.writerIndex(8); // Skip the length and status code

    boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
    if (success) {
        out.setInt(0, out.writerIndex() - 8); // length
        out.setInt(4, 0); // Status
    }
    Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
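For the recommended OrderedMemoryAwareThreadPoolExecutor route, a minimal sketch of how it could be wired into the pipeline from the question (the pool size and memory limits below are assumptions, not tuned values):

// Shared executor that runs handler work off the I/O worker threads,
// while preserving per-channel event ordering.
final ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        // executionHandler sits between the decoder and the business handler,
        // so RequestHandler#handle no longer blocks the NIO worker threads.
        return Channels.pipeline(decoder, executionHandler, handler);
    }
});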
