First, I'll explain the situation and the logic that I'm trying to implement:
I have multiple threads, each putting the result of its work (an object called Result) into a queue, QueueToSend.
My NettyClient runs in a thread, takes a Result from QueueToSend every millisecond, and should connect to the server and send a message created from that Result.
I also need these connections to be asynchronous. So the NettyHandler needs to know the Result for each channel in order to send the right message, process the right result, and then send the response back.
So I initialize the NettyClient bootstrap:

bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

and set the pipeline once when the app starts.
Then, every millisecond, I take a Result object from QueueToSend and connect to the server:

ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
ResultConcurrentHashMap.put(future.getChannel().getId(), result);

I decided to use a static ConcurrentHashMap to save every Result object taken from QueueToSend, associated with its channel.
The first problem occurs in NettyHandler's channelConnected method, when I try to take the Result object associated with the channel out of ResultConcurrentHashMap:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel channel = ctx.getChannel();
    Result result = ResultConcurrentHashMap.get(channel.getId());
}
But sometimes result is null (about 1 time in 50), even though it should be in ResultConcurrentHashMap. I think this happens because the channelConnected event fires before NettyClient runs this line:

ResultConcurrentHashMap.put(future.getChannel().getId(), result);

Maybe it would not happen if NettyServer and NettyClient were not both on localhost but on separate machines, since it would take more time to establish the connection. But I need a solution for this issue.
Another issue is that I am sending messages every millisecond asynchronously, and I suppose the messages may get mixed up so that the server cannot read them properly. If I send them one by one, it works fine:

future.getChannel().getCloseFuture().awaitUninterruptibly();

But I need asynchronous sending, and I need to process the right results associated with each channel and send the responses back.
What should I implement?
ChannelFutures are completed asynchronously before the events are fired. For example, the channel connect future is completed before the channelConnected event is fired.
So you have to register a channel future listener after calling bootstrap.connect() and populate the map from inside the listener; then the entry will be visible to the handler:
ChannelFuture channelFuture = bootstrap.connect(remoteAddress, localAddress);
channelFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        resultConcurrentHashMap.put(future.getChannel().getId(), result);
    }
});
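One small extension (my addition, not part of the original answer): the connect can also fail, so you may want to guard the put with an isSuccess() check, roughly like this:

channelFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            resultConcurrentHashMap.put(future.getChannel().getId(), result);
        } else {
            // connect failed: the handler will never see this channel,
            // so handle or requeue the result here instead
            future.getCause().printStackTrace();
        }
    }
});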
I am using Netty in one of my projects. There is a flow in which I have to send some data to the client from some other thread (not Netty's worker thread):
final ChannelFuture writeAndFlushFuture = ctx.channel().writeAndFlush(out)
        .addListener(new GenericFutureListener<Future<? super Void>>() {
            @Override
            public void operationComplete(Future<? super Void> future) throws Exception {
                LOGGER.info("=========> dsgdsfgdsfgdfsgdfsgsdgfd");
            }
        });
Also, I am waiting on the client's response after it receives my payload. I associated a timeout with it as well, so that whenever the client doesn't reply within the time frame, I close the context from the server (assuming an erroneous connection).
There is something strange going on. Whenever I send the payload to the client, I am not seeing this printed in my logs:
dsgdsfgdsfgdfsgdfsgsdgfd
But it gets printed just before I close the connection due to the timeout. I am not sure what I am doing wrong that the payload is not sent right away, but only just before the connection is closed.
What could be happening here?
Actually, a flush was missing when I was writing the object to the context; using

ctx.writeAndFlush(commands);

fixed it. Now it's working fine. Strangely, though, it was working without the flush when the write was executed from within Netty's worker thread.
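For context, a minimal sketch of the difference (the handler and message names are assumptions for illustration):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ResponseHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.write(msg);  // only queues the message in the outbound buffer
        ctx.flush();     // actually pushes the buffered writes to the socket
        // Equivalent to the two calls above in one step:
        // ctx.writeAndFlush(msg);
    }
}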
I have a consumer which reads data from a topic and spawns a thread to process each message, so at a single point in time there can be multiple messages being processed in the server. The application encountered DB timeouts, and all the messages being processed were lost. And since there were multiple threads polling for a DB connection, the application threw an out-of-memory exception and went down.
How can I improve the architecture to prevent data loss even if the consumer goes down without finishing its processing?
You should do at-least-once processing by committing the offsets only after you complete your processing, i.e. call

consumer.commitSync();

after the thread completes successfully.
Note that you also need to configure the consumer to stop committing offsets automatically, by setting enable.auto.commit to false.
You need to be careful, though, that your consumer is idempotent, i.e. if it fails and then reads and processes the same value again, it will not affect the outcome.
You should commit the offset after getting a successful response from DB.
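A minimal sketch of that manual-commit loop (the topic, group id, and writeToDb step are placeholders, not from the original answer):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "my-group");        // placeholder group id
props.put("enable.auto.commit", "false"); // we commit manually
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        writeToDb(record); // placeholder: must succeed before we commit
    }
    // Commit only after every record in this batch has been processed;
    // if the consumer crashes before this line, the batch is re-delivered
    // (at-least-once semantics).
    consumer.commitSync();
}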
The issue is related to the available database connections and threads. The only way to handle this is to acquire a database connection first and then hand that connection to the processing thread.
Thread Example
public class ConsumerThreadHandler implements Callable<Object> {
    private ConsumerRecord consumerRecord;
    private Connection dataBaseConnection;

    public ConsumerThreadHandler(ConsumerRecord consumerRecord, Connection dataBaseConnection) {
        this.consumerRecord = consumerRecord;
        this.dataBaseConnection = dataBaseConnection;
    }

    @Override
    public Object call() throws Exception {
        // Perform all the database-related work
        // and generate the proper response
        return null;
    }
}
Consumer Code
executor = new ThreadPoolExecutor(numberOfThreads, numberOfThreads, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(), new ThreadPoolExecutor.CallerRunsPolicy());
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (final ConsumerRecord record : records) {
        // Get a database connection: wait until a connection is available,
        // or maintain a connection pool and move on based on what is free.
        Future future = executor.submit(new ConsumerThreadHandler(record, dataBaseConnection));
        if (future.isDone()) {
            // Based on the proper response, commit the offset
        }
    }
}
You can go through the following simple example.
https://howtoprogram.xyz/2016/05/29/create-multi-threaded-apache-kafka-consumer/
I am using the StreamObserver class found in the grpc-java project to set up some bidirectional streaming.
When I run my program, I make an undetermined number of requests to the server, and I only want to call onCompleted() on the requestObserver once I have finished making all of the requests.
Currently, to solve this, I am using a variable "inFlight" to keep track of the requests that have been issued; when a response comes back, I decrement "inFlight". So, something like this:
// issuing requests
while (haveRequests) {
    MessageRequest request = mkRequest();
    this.requestObserver.onNext(request);
    this.inFlight++;
}

// response observer
StreamObserver<Message> responseObserver = new StreamObserver<Message>() {
    @Override
    public void onNext(Message response) {
        if (--inFlight == 0) {
            requestObserver.onCompleted();
        }
        // work on message
    }
    // other methods
};
A bit pseudo-codey, but this logic works. However, I would like to get rid of the "inFlight" variable if possible. Is there anything within the StreamObserver class that allows this sort of functionality without an additional variable to track state? Something that would tell me how many requests have been issued and when they have completed.
I've tried inspecting the object within the intellij IDE debugger, but nothing is popping out to me.
To answer your direct question, you can simply call onCompleted() right after the while loop; all the messages already passed to onNext() will still be delivered. Under the hood, gRPC will send what is called a "half close", indicating that it won't send any more messages, but that it is still willing to receive them. Specifically:
// issuing requests
while (haveRequests) {
    MessageRequest request = mkRequest();
    this.requestObserver.onNext(request);
}
requestObserver.onCompleted();
This ensures that all requests are sent, and in the order that you sent them. On the server side, when it sees the corresponding onCompleted callback, it can half-close its side of the connection by calling onCompleted() on its observer. (There are two observers on the server side: one for receiving info from the client, one for sending info.)
Back on the client side, you just need to wait for the server to half-close to know that all messages were received and processed. Note that if there were any errors, you would get an onError() callback instead.
If you don't know how many requests you are going to make on the client side, you might consider using an AtomicInteger and calling decrementAndGet() when you get back a response. If the return value is 0, you'll know all the requests have completed.
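A minimal sketch of that AtomicInteger variant (the names mirror the pseudo-code above and are assumptions):

final AtomicInteger inFlight = new AtomicInteger();

StreamObserver<Message> responseObserver = new StreamObserver<Message>() {
    @Override
    public void onNext(Message response) {
        // work on message...
        if (inFlight.decrementAndGet() == 0) {
            // all outstanding requests have been answered
            // (note: if responses can arrive while requests are still being
            // issued, the counter can transiently hit 0; guard accordingly)
        }
    }

    @Override
    public void onError(Throwable t) { /* handle failure */ }

    @Override
    public void onCompleted() { /* server half-closed its side */ }
};

while (haveRequests) {
    inFlight.incrementAndGet();
    requestObserver.onNext(mkRequest());
}
requestObserver.onCompleted(); // half-close: no more requests will be sent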
I'm trying to implement a distributed actor model which uses Netty as the communication protocol: the NIO version with TCP connections.
Let's say we have 2 nodes (machines); each has a Netty server instance that passes the incoming messages to actors on that node.
I would like to keep message ordering for the same pair of remote actors, so my solution was to use the asynchronous writeAndFlush method to send messages to the remote node and actor. When another message needs to be sent to the same actor before the first one has been delivered, I add it to a buffer, and in the callback of the writeAndFlush future I process the next one from the buffer. It looks like this:
channel.writeAndFlush(message).addListener(new MessageListener(mailboxOfSelector));
the callback method is:
@Override
public void operationComplete(ChannelFuture future) throws Exception {
    Queue<RemoteMessage> unsentToMailbox = unsentMessages.get(mailboxOfSelector);
    if (!unsentToMailbox.isEmpty()) {
        RemoteMessage message = unsentToMailbox.poll();
        channel.writeAndFlush(message).addListener(this);
    }
}
So if A and B are 2 server instances connected with a Channel and we send from A to B, my question would be: what exactly does the isSuccess flag mean? And when does the callback actually fire?
Does it fire when the message has passed the last handler on A, or when it has actually been delivered to the first handler on B?
In Netty 5 (alpha2), after flushing the data to the SocketChannel, Netty calls back the operationComplete method. This does not mean the data has reached the client; it means the data has been handed over to the TCP protocol stack. You can see this in the source code:

io.netty.channel.ChannelOutboundBuffer.java

It calls promise.trySuccess() from the remove() or remove(Throwable cause) method, which triggers the operationComplete() callback.
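Concretely, a listener like the following (a sketch; the channel and message variables are assumed) only learns about local write success or failure:

channel.writeAndFlush(message).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        if (future.isSuccess()) {
            // The bytes were handed to the local TCP stack on A.
            // This does NOT confirm that B's handlers have seen the message;
            // an application-level acknowledgement is needed for that.
        } else {
            // The write failed locally (e.g. the channel was closed).
            future.cause().printStackTrace();
        }
    }
});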
Regarding Java NIO2.
Suppose we have the following to listen to client requests...
asyncServerSocketChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Object>() {
    @Override
    public void completed(final AsynchronousSocketChannel asyncSocketChannel, Object attachment) {
        // Put the execution of the completion handler on another thread so that
        // we don't block another channel being accepted.
        executer.submit(new Runnable() {
            public void run() {
                handle(asyncSocketChannel);
            }
        });
        // Accept another connection.
        asyncServerSocketChannel.accept(null, this);
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        // TODO Auto-generated method stub
    }
});
This code will accept a client connection, process it, and then accept another.
To communicate with the server, the client opens an AsynchronousSocketChannel and fires the message.
The completion handler's completed() method is then invoked.
However, this means that if the client wants to send another message on the same AsynchronousSocketChannel instance, it can't.
It has to create another AsynchronousSocketChannel instance, which I believe means another TCP connection, which is a performance hit.
Any ideas how to get around this?
Or, to put the question another way: any ideas how to make the same asyncSocketChannel receive multiple CompletionHandler completed() events?
edit:
My handling code is like this...
public void handle(AsynchronousSocketChannel asyncSocketChannel) {
    ByteBuffer readBuffer = ByteBuffer.allocate(100);
    try {
        // read a message from the client, timeout after 10 seconds
        Future<Integer> futureReadResult = asyncSocketChannel.read(readBuffer);
        futureReadResult.get(10, TimeUnit.SECONDS);
        String receivedMessage = new String(readBuffer.array());
        // some logic based on the message here...
        // after the logic is a return message to client
        ByteBuffer returnMessage = ByteBuffer.wrap((RESPONSE_FINISHED_REQUEST + " " + client
                + ", " + RESPONSE_COUNTER_EQUALS + value).getBytes());
        Future<Integer> futureWriteResult = asyncSocketChannel.write(returnMessage);
        futureWriteResult.get(10, TimeUnit.SECONDS);
    } ...
So that's it: my server reads a message from the async channel and returns an answer.
The client blocks until it gets the answer, but that's OK; I don't care if the client blocks.
When this is finished, the client tries to send another message on the same async channel, and it doesn't work.
There are 2 phases of connection and 2 different kinds of completion handlers.
The first phase is to handle a connection request; this is what you have programmed (BTW, as Jonas said, there is no need to use another executor). The second phase (which can be repeated multiple times) is to issue an I/O request and to handle its completion. For this, you have to supply a memory buffer holding the data to read or write, and you did not show any code for this. When you do the second phase, you'll see that there is no such problem as you wrote: "if the client wants to send another message on the same AsyncSocket instance it can't".
One problem with NIO2 is that, on the one hand, the programmer has to avoid multiple async operations of the same kind (accept, read, or write) on the same channel (or else an error occurs), and on the other hand, the programmer has to avoid blocking waits in handlers. This problem is solved in the df4j-nio2 subproject of the df4j actor framework, where both AsyncServerSocketChannel and AsyncSocketChannel are represented as actors. (df4j is developed by me.)
First, you should not use an executor like you do in the completed method; the completed method is already executed on a worker thread.
In your completed method for accept(...), you should call asyncSocketChannel.read(...) to read the data. The client can then just send another message on the same socket; this message will be handled by a new call to the completed method, perhaps by another worker thread on your server.
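A minimal sketch of that approach (a re-arming read; the message handling and charset are assumptions):

// Issue an asynchronous read whose CompletionHandler re-arms itself, so the
// same AsynchronousSocketChannel keeps receiving messages on one connection.
void readLoop(final AsynchronousSocketChannel channel) {
    final ByteBuffer buffer = ByteBuffer.allocate(100);
    channel.read(buffer, null, new CompletionHandler<Integer, Void>() {
        @Override
        public void completed(Integer bytesRead, Void attachment) {
            if (bytesRead < 0) { // client closed the connection
                try { channel.close(); } catch (IOException ignored) { }
                return;
            }
            buffer.flip();
            String message = StandardCharsets.UTF_8.decode(buffer).toString();
            // ...handle the message, write a response...
            buffer.clear();
            channel.read(buffer, null, this); // re-arm for the next message
        }

        @Override
        public void failed(Throwable exc, Void attachment) {
            try { channel.close(); } catch (IOException ignored) { }
        }
    });
}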