I have to manage a client-server application with more than 1,000 connected clients using Netty 3.5.1. Sometimes updates that are written to the database get lost when we disconnect our clients by restarting our servers. When performing a restart/shutdown, we shut down our Netty components like this:
shutdown server-channel
disconnect all clients (via ChannelGroupFuture)
call releaseExternalResources() on our ChannelPipeline
call releaseExternalResources() on our ExecutionHandler, which is part of our ChannelPipeline (is it necessary to invoke it manually?)
However, I wonder why ExecutorUtil.terminate (which is called by the ExecutionHandler) performs a shutdownNow on the passed ExecutorService: shutdownNow drains all pending tasks from the queue and returns them, but since ExecutorUtil.terminate returns void, those tasks are discarded and never executed. Wouldn't it be more appropriate to invoke shutdown on the ExecutorService and wait for completion?
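For reference, the shutdown vs. shutdownNow difference being asked about can be demonstrated with the plain JDK (nothing Netty-specific; pool size, task count, and sleep duration are arbitrary):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ShutdownDemo {

    // Submits 5 slow tasks, then shuts down gracefully: all of them still run.
    static int runGraceful() throws InterruptedException {
        AtomicInteger executed = new AtomicInteger();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 5; i++) {
            pool.submit(() -> sleepAndCount(executed));
        }
        pool.shutdown();                              // queued tasks are allowed to finish
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return executed.get();                        // 5
    }

    // Submits 5 slow tasks, then calls shutdownNow: queued tasks are drained
    // and returned, and the currently running one is interrupted.
    static int[] runAbrupt() throws InterruptedException {
        AtomicInteger executed = new AtomicInteger();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 5; i++) {
            pool.submit(() -> sleepAndCount(executed));
        }
        List<Runnable> drained = pool.shutdownNow();  // returned, never executed
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return new int[] { drained.size(), executed.get() };
    }

    private static void sleepAndCount(AtomicInteger executed) {
        try {
            Thread.sleep(100);
            executed.incrementAndGet();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();       // interrupted by shutdownNow
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("graceful executed = " + runGraceful());   // prints 5
        int[] abrupt = runAbrupt();
        System.out.println("abrupt drained = " + abrupt[0] + ", executed = " + abrupt[1]);
    }
}
```

If ExecutorUtil.terminate ignored the list returned by shutdownNow, those drained tasks would be lost exactly as described.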
That's a good suggestion. Would you mind opening an issue for it on our issue tracker? [1]
[1] https://github.com/netty/netty/issues
I am reading the documentation about the Channel.basicCancel operation in RabbitMQ: https://www.rabbitmq.com/consumer-cancel.html . The docs say that one possible cancellation case is when the consumer sends a cancel signal on the same channel on which it is listening.
Is this the only possibility? Can you cancel a remote consumer running on a different channel/connection/process?
I am trying to send the cancel request from another process. When I do, it fails with an exception, java.io.IOException: Unknown consumerTag, as if the operation were restricted to cancelling local consumers (on one's own channel or connection).
UPDATE:
I noticed that this "Unknown consumerTag" exception is a result of initial validation inside com.rabbitmq.client.impl.ChannelN.basicCancel(String):
Consumer originalConsumer = (Consumer) this._consumers.get(consumerTag);
if (originalConsumer == null) {
    throw new IOException("Unknown consumerTag");
}
...
But there might still be some RPC call that does the trick...
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The documentation is correct, you must cancel a consumer from its own channel/connection.
Other options include making your consumers aware of "cancellation messages" that will cause them to stop themselves, or using the API to close an entire connection, which will close all channels associated with it.
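The "cancellation message" idea is independent of RabbitMQ. A minimal JDK-only sketch of the pattern, with a BlockingQueue standing in for the broker queue and a hypothetical STOP sentinel as the cancellation message:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PoisonPillConsumer {
    static final String STOP = "__STOP__"; // hypothetical cancellation message

    // Consumes messages until the STOP sentinel arrives, then stops itself.
    static List<String> consume(BlockingQueue<String> queue) throws InterruptedException {
        List<String> handled = new ArrayList<>();
        while (true) {
            String msg = queue.take();
            if (STOP.equals(msg)) {
                // With RabbitMQ this is where the consumer would call
                // channel.basicCancel(consumerTag) on its own channel.
                break;
            }
            handled.add(msg);
        }
        return handled;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("a");
        queue.put("b");
        queue.put(STOP);      // any process that can publish can request cancellation
        queue.put("ignored"); // arrives after cancellation, never consumed
        System.out.println(consume(queue)); // prints [a, b]
    }
}
```

Because the sentinel travels through the same queue as normal messages, any remote process that can publish to the queue can trigger the cancellation, which is what the basicCancel restriction otherwise prevents.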
I am using CompletableFuture to make a web service call:
CompletableFuture<List<ProductCatalog>> completeableFuture =
    CompletableFuture.supplyAsync(() -> restTemplate.getForEntity(serviceURL, Response.class))
        .thenApply(responseEntity -> buildResponse(responseEntity.getBody()));
Using the above code, I can see that the JVM assigns a pool worker (onPool-worker-1) thread to call the web service, so it is working as intended:
2018-03-12 10:11:08.402 INFO 14726 --- [nio-9020-exec-1] c.a.m.service.products.config.LogAspect : Entering in Method : productsCatalogClass Name : com.arrow.myarrow.service.products.controller.ProductsControllerArguments : []Target class : com.arrow.myarrow.service.products.controller.ProductsController
2018-03-12 10:11:43.561 INFO 14726 --- [onPool-worker-1] c.a.m.s.p.s.ProductsCatalogService : listProductCatalog has top level elements - 8
Since the web service call can be time consuming, I am using the CompletableFuture.supplyAsync method to invoke the web service. Can someone answer the following questions:
Will the main thread be unblocked and free for other stuff?
Is it really non-blocking I/O?
Will there be any performance implications or advantages?
I am running my application as a Spring Boot application on Tomcat. The main inspiration behind using CompletableFuture is that when the web service (a third-party system) is down, all my incoming requests won't get blocked. As Tomcat by default has a thread pool of size 200, will the system be blocked if 200 users are waiting for a response from the above call, or will there be any difference from using the above code?
That's not non-blocking I/O; it's just regular blocking I/O done in a separate thread. What's worse, if you don't explicitly provide an executor, the CompletableFutures will be executed in the common ForkJoinPool, which will fill up quickly.
If you use an asynchronous servlet and a separate executor, you can have long-running operations without using threads from Tomcat's worker pool. However, this is only useful if the other requests can finish in some way. If all requests are offloaded to the custom executor, you'll just be swapping one pool for another.
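To make the explicit-executor advice concrete, here is a minimal JDK-only sketch; slowServiceCall is a hypothetical stand-in for the blocking restTemplate call, and the pool size is arbitrary:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExplicitExecutorDemo {
    // Hypothetical stand-in for the blocking restTemplate.getForEntity(...) call.
    static String slowServiceCall() {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "response";
    }

    static String callService(ExecutorService executor) throws Exception {
        // The blocking call runs on the dedicated pool, not the common
        // ForkJoinPool and not on a Tomcat worker thread.
        CompletableFuture<String> future = CompletableFuture
                .supplyAsync(ExplicitExecutorDemo::slowServiceCall, executor)
                .thenApply(body -> "built:" + body);
        return future.get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService serviceCallPool = Executors.newFixedThreadPool(10);
        try {
            System.out.println(callService(serviceCallPool)); // prints built:response
        } finally {
            serviceCallPool.shutdown();
        }
    }
}
```

Sizing the dedicated pool caps how many threads can be tied up waiting on the third-party service, independently of Tomcat's 200 workers.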
Is it right to say that the Java gRPC server thread will still run even after the DEADLINE time, but that the gRPC server will only stop/block that thread from making any subsequent gRPC calls once the DEADLINE has passed?
If the above is a correct statement, is there a way to also stop/block the thread from making any Redis/DB calls for which the DEADLINE has passed? Or, once the DEADLINE has passed, to interrupt the thread immediately?
Is it right to say that the Java gRPC server thread will still run even after the DEADLINE time?
Correct. Java doesn't offer any real alternatives.
But the gRPC server will only stop/block that thread from making any subsequent gRPC calls once the DEADLINE has passed?
Mostly. Outgoing gRPC calls observe the io.grpc.Context, which means deadlines and cancellations are propagated (unless you fail to propagate Context to another thread or use Context.fork()).
If the above is a correct statement, is there a way to also stop/block the thread from making any Redis/DB calls for which the DEADLINE has passed? Or, once the DEADLINE has passed, to interrupt the thread immediately?
You can listen for the Context cancellation via Context.addListener(). The gRPC server will cancel the Context when the deadline expires or if the client cancels the RPC. This notification is how outgoing RPCs are cancelled.
I will note that thread interruption is a bit involved to perform without racing. If you want interruption and don't have a Future already, I suggest wrapping your work in a FutureTask (and simply calling FutureTask.run() on the current thread) in order to get its non-racy cancel(true) implementation.
final FutureTask<Void> future = new FutureTask<Void>(work, null);
Context current = Context.current();
CancellationListener listener = new CancellationListener() {
    @Override public void cancelled(Context context) {
        future.cancel(true);
    }
};
// addListener takes an Executor; directExecutor() runs the callback inline
current.addListener(listener, MoreExecutors.directExecutor());
future.run();
current.removeListener(listener);
You can check Context.isCancelled() before making Redis / DB queries, and throw StatusException(CANCELLED) if it has.
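The cancel(true) behaviour this approach relies on can be seen in isolation with plain JDK code (no gRPC involved; the sleeping task is a hypothetical stand-in for a blocking Redis/DB call):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.FutureTask;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancelDemo {
    // Runs interruptible work on a spare thread, cancels it, and reports
    // whether the worker actually observed the interrupt.
    static boolean runAndCancel() throws InterruptedException {
        AtomicBoolean interrupted = new AtomicBoolean(false);
        CountDownLatch started = new CountDownLatch(1);
        FutureTask<Void> future = new FutureTask<>(() -> {
            started.countDown();
            try {
                Thread.sleep(10_000); // stands in for a blocking Redis/DB call
            } catch (InterruptedException e) {
                interrupted.set(true); // cancel(true) delivered the interrupt
            }
        }, null);

        Thread worker = new Thread(future);
        worker.start();
        started.await();      // make sure the work is actually running
        future.cancel(true);  // what the CancellationListener above would call
        worker.join();
        return interrupted.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("interrupted=" + runAndCancel()); // prints interrupted=true
    }
}
```

FutureTask only interrupts the thread that is currently inside run(), which is what makes cancel(true) race-free compared to hand-rolled Thread.interrupt() bookkeeping.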
I'm using a Spring WebFlux WebSocketClient to subscribe to and handle messages from a remote web socket. During processing the Flux of messages from the remote socket will sometimes unexpectedly complete (or terminate on error) causing the web socket client's onComplete (or onError) callback to execute. When this occurs, my onComplete and onError callbacks publish an event. An event listener responds by calling the function that creates another web socket client which connects to the same external web socket and the socket processing starts over again.
My problem is that I cannot figure out how to free the WebSocketClient resources after a client completes processing. This causes unused threads to accumulate in the JVM. In particular, the threads on which the first WebSocketClient was running (WebSocketClient-SecureIO-1, WebSocketClient-SecureIO-2 and parallel-1) remain in a waiting state, and new threads are started for the new WebSocketClient. I thought calling close() on the WebSocketSession would solve the problem, but it does not.
The pattern of my implementation is:
public void startProcessing() {
    WebSocketClient client = new StandardWebSocketClient();
    Mono<String> subscribeMsg = Mono.just("...");
    client
        .execute(uri, webSocketSession ->
            webSocketSession
                .send(subscribeMsg.map(webSocketSession::textMessage))
                .thenMany(webSocketSession.receive())
                .map(webSocketMessage -> ...)
                .buffer(Duration.ofSeconds(bufferDuration))
                .doOnNext(handler)
                .doOnComplete(() -> webSocketSession.close())
                .then())
        .subscribe(
            aVoid -> LOGGER.info("subscription started"),
            throwable -> {... publish restart event ...},
            () -> {... publish restart event ...});
}

public void restartEventListener() {
    startProcessing();
}
Any suggestions on how I can prevent unused WebSocketClient threads from accumulating in the JVM?
A few ideas:
A WebSocketClient pools resources, so you should reuse the same client for many requests.
You should avoid doing processing inside doOn* operators. Those are side-effect operators and are executed synchronously on the current Scheduler. For more efficiency, you should use other operators. You could map the websocket messages to a Flux<DataBuffer> and then use DataBufferUtils::write to write them to a file, still leveraging the same reactive pipeline instead of relying on side-effect operators.
Closing the websocket session in one of those is not a bad idea, although I'd use doOnTerminate, which is triggered in both success and error scenarios.
Also, I don't understand the goal of publishing events to restart the processing phase. Using the retry and repeat operators with the same client should work just fine and be more efficient.
I am using a 3rd party blocking API. I am going to be using this API as follows:
while (true) {
    blockingAPI();
    sendResultSomewhere();
}
blockingAPI() polls a server for a specific property until it gets a response.
In order to make things asynchronous to some extent, I could spawn this API call in a separate thread and have a callback implemented in Java to handle the response. I was wondering if I can use the Netty framework in this scenario, and how I could do this? The examples I have seen involve a server that listens and communicates with a client, and I am not sure how my use case fits in.
If netty cannot be used, would my best bet be spawning a new thread and implementing a callback in Java?
I'm not sure what you are really trying to do:
Spawn internally a new thread: you could use a LocalChannel with Netty for intra-JVM communication, giving you something like what you want without any network involvement (only within the JVM). The blockingAPI call would run on the ServerLocalChannel side, while the result is written back once the client side gets a response through the same LocalChannel.
Spawn in response to a request from outside (network): Netty could of course be used there too, perhaps still keeping the LocalChannel logic to separate networking from computation.
Note that I would recommend using asynchronous operations over the LocalChannel (executing the blocking task), so that sending the result somewhere else is done without blocking Netty's network I/O thread.
Network Handler side:
localChannel = creationWithinNetworkHandler(networkChannelCtx);
localChannel.writeAndFlush(something);
while the LocalChannel handler on the server side could be:
void read0(ChannelHandlerContext ctx, someData) {
    blockingAPI();
    ctx.channel().writeAndFlush(answer).addListener(ChannelFutureListener.CLOSE);
}
and the LocalChannel handler on the client side could be:
void read0(ChannelHandlerContext ctx, answer) {
    // using the ctx from the network channel side
    networkCtx.writeAndFlush(answer);
}