Can CompletableFuture be used to create non-blocking I/O calls? - java

I am using CompletableFuture to make a web service call:
CompletableFuture<List<ProductCatalog>> completeableFuture =
        CompletableFuture.supplyAsync(() -> restTemplate.getForEntity(serviceURL, Response.class))
                         .thenApply(responseEntity -> buildResponse(responseEntity.getBody()));
Using the above code, I can see that the JVM assigns a pool worker (onPool-worker-1) thread to call the web service, so it is working as intended:
2018-03-12 10:11:08.402 INFO 14726 --- [nio-9020-exec-1] c.a.m.service.products.config.LogAspect : Entering in Method : productsCatalogClass Name : com.arrow.myarrow.service.products.controller.ProductsControllerArguments : []Target class : com.arrow.myarrow.service.products.controller.ProductsController
2018-03-12 10:11:43.561 INFO 14726 --- [onPool-worker-1] c.a.m.s.p.s.ProductsCatalogService : listProductCatalog has top level elements - 8
Since the web service call can be time consuming, I am using the CompletableFuture.supplyAsync method to invoke the web service. Can someone answer the following questions:
Will the main thread be unblocked and free for other stuff?
Is it really non-blocking I/O code?
Will there be any performance implications or advantages?
I am running my application as a Spring Boot application on Tomcat. The main motivation behind using CompletableFuture is that when the web service (a third-party system) is down, my incoming requests won't all get blocked. As Tomcat by default has a thread pool of size 200, will the system be blocked if 200 users are waiting for a response from the above call, or will the above code make any difference?

That's not non-blocking I/O. It's just regular blocking I/O done in a separate thread. What's worse, if you don't explicitly provide an executor, the CompletableFuture tasks will run on the common ForkJoinPool, which will fill up quickly.
If you use an asynchronous servlet and a separate executor, you can have long running operations without using threads from Tomcat's worker pool. However this is only useful if the other requests can finish in some way. If all requests are offloaded to the custom executor, you'll just be switching one pool for another.
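As a rough illustration of the executor point, here is a minimal sketch of how the question's call could be run on a dedicated pool instead of the common ForkJoinPool. The pool size is arbitrary, and restTemplate, serviceURL, Response, ProductCatalog, and buildResponse are the names from the question, so this is a fragment rather than a complete program:

    // A dedicated pool for the (still blocking) web-service calls, sized independently
    // of Tomcat's worker pool and of the common ForkJoinPool.
    ExecutorService wsPool = Executors.newFixedThreadPool(20);

    CompletableFuture<List<ProductCatalog>> future =
            CompletableFuture.supplyAsync(
                    () -> restTemplate.getForEntity(serviceURL, Response.class),
                    wsPool)                                           // run on the dedicated pool
            .thenApply(responseEntity -> buildResponse(responseEntity.getBody()));

The I/O is still blocking, but at least a slow third-party service only exhausts this dedicated pool rather than the common pool or Tomcat's request threads.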

Related

How to limit the number of threads for all concurrent methods at once

I am working on a legacy project which uses Java 8, Spring, HikariCP, and MySQL. The microservices' methods are triggered by a Kafka topic and start a reporting operation. Almost all of the triggered methods contain the snippet below, and some of them repeat the same usage inside their own blocks.
new ForkJoinPool().submit(() -> { users.parallelStream().forEach(user ->
The application creates 8-9k threads, all of which try to get or create a record. The database could not handle these requests and started to throw exceptions, and Zabbix sends mails about heap memory usage above 90%:
Caused by: java.sql.SQLTransientConnectionException: HikariPool-2 -
Connection is not available, request timed out after 30000ms.
When I check the database, I see max_connections = 600, but this is not enough.
I want to set a limit on the thread count at the application level.
I tried setting these parameters, but the thread count doesn't decrease:
SPRING_TASK_EXECUTION_POOL_QUEUE-CAPACITY , SPRING_TASK_EXECUTION_POOL_MAX-SIZE, -Djava.util.concurrent.ForkJoinPool.common.parallelism
Is there any property to solve this problem?
I changed every new ForkJoinPool() to ForkJoinPool.commonPool() and used -Djava.util.concurrent.ForkJoinPool.common.parallelism to control thread creation; after that, my problem was fixed.
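A rough sketch of that change, reusing the users collection and per-user work from the question (the parallelism value of 16 is only an example):

    // Before: every invocation created its own pool, so the JVM-wide thread count was unbounded.
    // new ForkJoinPool().submit(() -> users.parallelStream().forEach(user -> ...));

    // After: reuse the common pool, whose size is capped by
    // -Djava.util.concurrent.ForkJoinPool.common.parallelism=16
    ForkJoinPool.commonPool().submit(() ->
            users.parallelStream().forEach(user -> {
                // get or create the record for this user
            })
    ).join(); // wait for the whole batch before moving on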

How to let the Tomcat main thread accept more clients' requests instead of sleeping until future.isDone()

I have read about Callable and Future in Java.
I want to run a Java web server that does some background work in a Callable.
I have a few questions:
1) future.get() blocks my main thread.
while (!future.isDone()) {
    Thread.sleep(1000);
}
is a busy wait.
How should I release the main thread (the Tomcat thread) so it can dispatch more clients' requests?
If it sleeps in a while loop, is it somehow context-switched so it can pick up more clients' requests?
2) In general, how does Tomcat make the server stateless?
Meaning, is each field of my Java server class thread safe because a new instance is created for each client request?

Why does Netty's ExecutorUtil.terminate() call ExecutionHandler.shutdownNow()?

I have to manage a client-server application with more than 1K connected clients using Netty 3.5.1. Sometimes updates that are written to the database get lost when we disconnect our clients by restarting our servers. When performing a restart/shutdown, we shut down our Netty components like this:
shut down the server channel
disconnect all clients (via ChannelGroupFuture)
call releaseExternalResources() on our ChannelPipeline
call releaseExternalResources() on our ExecutionHandler, which is part of our ChannelPipeline (is it necessary to invoke it manually?)
However, I wonder why ExecutorUtil.terminate (which is called by the ExecutionHandler) does a shutdownNow on the passed ExecutorService, since shutdownNow drains all remaining tasks from the queue and returns them. Those tasks will never be executed, because ExecutorUtil.terminate returns void. Wouldn't it be more appropriate to invoke shutdown on the ExecutorService and wait for completion?
That's a good suggestion. Would you mind opening an issue for it on our issue tracker? [1]
[1] https://github.com/netty/netty/issues
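For reference, a minimal sketch of the graceful pattern the question suggests: stop accepting new tasks, let queued work drain, and only force a shutdown after a timeout. The executor variable and the 30-second timeout are illustrative assumptions, not Netty code:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.TimeUnit;

    static void shutdownGracefully(ExecutorService executor) throws InterruptedException {
        executor.shutdown();                                   // no new tasks; queued tasks still run
        if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
            executor.shutdownNow();                            // drain whatever is left after the timeout
        }
    }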

Call a Web Service from Servlet at AppEngine

Question: What is the best way to call a web service (0.5-1.5 seconds/call) from a servlet on AppEngine? Are blocking calls scalable in the AppEngine environment?
Context: I am developing a web application using AppEngine and J2EE. The application calls the Amazon web service to grab some information for the user. From my ASP.NET experience, the best way to do such calls is to use an async HTTP handler to prevent starvation of the IIS thread pool. This feature is not available for J2EE with the Servlet 2.5 spec (3.0 is planned).
Right now I am thinking of making my controllers (and servlets) thread safe and request scoped. Is there anything else I can do? Is it even an issue in the J2EE + AppEngine environment?
EDIT: I am aware of AppEngine and JAX-WS async invocation support, but I am not sure how it plays with the servlet environment. As far as I understand, to complete the servlet request, the code still has to wait for the async WS call to complete (via a callback or whatever).
I assume that doing it with synchronization primitives will block the current working thread.
So, as long as the thread is blocked, to serve another user request the servlet container needs to allocate a new thread in the thread pool, allocate new memory for its stack, and waste time on context switching. Moreover, requests can block the entire server when we run out of threads in the thread pool. These assumptions are based on the ASP.NET and IIS threading model. Are they applicable to the J2EE environment?
ANSWER: After studying the Apache and GAE documentation, it seems that starvation of threads in the thread pool is not a real issue. Apache by default has 200 threads in its thread pool (compared to 25 in ASP.NET and IIS). Based on this I infer that threads are rather cheap in the JVM.
If async processing is really required, or the servlet container runs out of threads, it's possible to redesign the application to send the response via the Google Channel API.
The workflow will look like this:
Make a sync request to the servlet
Servlet creates a channel for the async reply and queues a task for a background worker
Servlet returns a response to the client
[Serving other requests]
Background worker does the processing and pushes the data to the client via the Channel API
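A rough sketch of steps 2 and 5 with the Channel API (the clientId key and resultJson payload are placeholders, not part of the question):

    import com.google.appengine.api.channel.ChannelMessage;
    import com.google.appengine.api.channel.ChannelService;
    import com.google.appengine.api.channel.ChannelServiceFactory;

    // Step 2: in the servlet, open a channel for this client and return the token in the response.
    ChannelService channelService = ChannelServiceFactory.getChannelService();
    String token = channelService.createChannel(clientId);   // clientId is a per-user key

    // Step 5: later, in the background worker, push the finished result to the same client.
    channelService.sendMessage(new ChannelMessage(clientId, resultJson));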
As you observe, servlets don't support using a single thread to service multiple concurrent requests - one thread is required per request. The best way to do your HTTP call is to use asynchronous urlfetch, and wait on that call to complete when you need the result. This will block the request's thread, but there's no avoiding that - the thread is dedicated to the current request until it terminates no matter what you do.
If you don't need the response from the API call to serve the user's request, you could use the task queue to do the work offline, instead.
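A rough sketch of the asynchronous urlfetch approach described above (the URL and what is done with the response body are placeholders):

    import java.net.URL;
    import java.util.concurrent.Future;
    import com.google.appengine.api.urlfetch.HTTPResponse;
    import com.google.appengine.api.urlfetch.URLFetchService;
    import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

    URLFetchService urlFetch = URLFetchServiceFactory.getURLFetchService();
    Future<HTTPResponse> pending =
            urlFetch.fetchAsync(new URL("https://example.amazonaws.com/some-call")); // placeholder URL

    // ... do other work for this request while the fetch is in flight ...

    HTTPResponse response = pending.get();   // blocks this request thread only when the result is needed
    byte[] body = response.getContent();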
Isn't it OK to use fetchAsync?
Look at this, it might help:
http://today.java.net/pub/a/today/2006/09/19/asynchronous-jax-ws-web-services.html
I am not sure you can exactly replicate what you do in .NET, but here is what you could do to simulate it on page load:
Submit an Ajax request to the controller using a JavaScript body onload
In the controller, start the async task, send the response back to the user, and use a session token to keep track of the task
Poll the controller (add another method that asks for an update on the task, since you have the session token to track it) until you get the response
You can do this either with a waiting-for-response page or with a hidden frame that keeps polling the controller
Once you have the response you are looking for, remove the session token
If you want the best option, Reverse Ajax / server push would be ideal in this case instead of polling.
Edit: Now I understand what you mean. I think you can have your code execute the async task without waiting for a response from the task itself, and just send the response back to the user. Below, I have a simple thread that I start but do not wait for; I send the response back to the user and at the same time use a session token to track the request.
@Controller
@RequestMapping("/asyncTest")
public class AsyncCotroller {

    @RequestMapping(value = "/async.html", method = RequestMethod.GET)
    public ModelAndView dialogController(Model model, HttpServletRequest request) {
        System.err.println("(System.currentTimeMillis()/1000) " + (System.currentTimeMillis() / 1000));
        // start a thread (async simulator)
        new Thread(new MyRunnbelImpl()).start();
        // use this attribute to track the response
        request.getSession().setAttribute("asyncTaskSessionAttribute", "asyncTaskSessionAttribute");
        // if you look at the print of system out, you will see that it is not waiting on the async task
        System.err.println("(System.currentTimeMillis()/1000) " + (System.currentTimeMillis() / 1000));
        return new ModelAndView("test");
    }

    class MyRunnbelImpl implements Runnable {
        @Override
        public void run() {
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Servlet 3.0 asynchronous

What's the difference between the Servlet 3.0 asynchronous feature and the old servlet approach below?
Old servlet implementation:
doGet(request, response) {
    Thread t = new Thread(new Runnable() {
        public void run() {
            // heavy processing
            response.write(result);
        }
    });
    t.start();
}
In Servlet 3.0, if I spend a thread on the heavy processing, I free one thread in the container, but I still burn another one on the heavy processing... :(
Could somebody help?
This won't work. Once your doGet method ends, the response is complete and sent back to the client. Your thread may or may not still be running, but it can't change the response any longer.
What the new async feature in Servlet 3.0 does, is that it allows you to free the request thread for processing another request. What happens is the following:
RequestThread: |-- doGet() { startAsync() } // Thread free to do something else
WorkerThread: |-- do heavy processing --|
OtherThread: |-- send response --|
The important thing is that once RequestThread has started asynchronous processing via a call to startAsync(...), it is free to do something else. It can accept new requests, for example. This improves throughput.
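To make the timeline concrete, here is a minimal sketch of that pattern on a Servlet 3.0 container; the servlet name, URL pattern, worker-pool size, and doHeavyProcessing() are illustrative, not from the question:

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/report", asyncSupported = true)
    public class ReportServlet extends HttpServlet {

        private final ExecutorService workers = Executors.newFixedThreadPool(10);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();             // request thread is released when doGet returns
            workers.submit(() -> {                           // heavy processing moves to a worker thread
                try {
                    String result = doHeavyProcessing();     // placeholder for the long-running work
                    ctx.getResponse().getWriter().write(result);
                } catch (IOException e) {
                    // logging omitted in this sketch
                } finally {
                    ctx.complete();                          // tells the container the response is finished
                }
            });
        }

        private String doHeavyProcessing() {
            return "result";
        }
    }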
There are several APIs supporting COMET (long-lived HTTP requests, where there is no thread-per-request problem) programming, so there is no strict need to use the Servlet 3 API to avoid thread-per-request. One is the Grizzly engine which runs in Glassfish 2.11 (example). A second solution is Jetty Continuations. The third is the Servlet 3 API.
The basic concept is that the request creates some container-managed asynchronous handler through which the request can subscribe to an event identified by an object (for example a client-id string). Then, once the asynchronous processing thread tells the handler that the event has occurred, the request gets a thread to continue. Which API you can use depends entirely on your chosen application server. Which is your choice?
The Servlet 3.0 async feature lets you keep the HTTP connection open but release any unused threads when the request cannot be served immediately and is waiting for some event to occur, or, for example, when you are writing a comet/reverse-Ajax application. In the above case you are creating a new thread anyway, so it should not make any difference for you unless you want to keep the request waiting for some event.
Creating your own threads in a servlet container is asking for trouble. (There might be cases where you have to do it, but if you have some framework that manages the threads for you, then you should use it.)
