How does asynchronous servlet processing improve performance? - Java

I read from http://docs.oracle.com/javaee/7/tutorial/doc/servlets012.htm
Java EE provides asynchronous processing support for servlets and filters. If a servlet or a filter reaches a potentially blocking operation when processing a request, it can assign the operation to an asynchronous execution context and return the thread associated with the request immediately to the container without generating a response. The blocking operation completes in the asynchronous execution context in a different thread, which can generate a response or dispatch the request to another servlet.
I am wondering where the
different thread
comes from. Assuming the container has 10 threads and 5 of them are processing requests, do we have to use the other 5 to process the long-running business logic? Where does the performance improvement come from? The total number of threads available is limited, right?
Thanks.

Read the Servlet 3.0 final spec, Section 2.3.3.3 - Asynchronous processing, where this is explained in detail.
AsyncContext.start(Runnable) causes the container to dispatch a thread, possibly from a managed thread pool, to run the specified Runnable. AsyncContext is the standard way, defined in the Servlet 3.0 specification, to handle HTTP requests asynchronously.
Basically, the HTTP request is no longer tied to an HTTP thread, allowing us to handle it later, possibly using fewer threads. It turns out the specification provides an API to run the asynchronous work on a different thread pool out of the box.
Read more about Executors.newFixedThreadPool(), which creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be active processing tasks; if additional tasks are submitted when all threads are active, they wait in the queue until a thread is available.
Please have a look at ExecutorService to read more about it along with sample code.
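For example, here is a minimal sketch of that combination, assuming Servlet 3.0 (javax.servlet) APIs; the servlet name, the URL pattern, the pool size, and callSlowBackend() are all made up for illustration:

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class SlowServlet extends HttpServlet {

    // a fixed pool for the blocking work; 10 is an arbitrary example size
    private final ExecutorService executor = Executors.newFixedThreadPool(10);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // detach the request from the container's HTTP thread
        AsyncContext ctx = req.startAsync();
        executor.submit(() -> {
            try {
                String result = callSlowBackend();            // blocking call
                ctx.getResponse().getWriter().write(result);
            } catch (IOException e) {
                // log and fall through; complete() below ends the request
            } finally {
                ctx.complete();  // tell the container the response is done
            }
        });
        // doGet returns here; the HTTP thread goes back to the container pool
    }

    // stand-in for the long-running blocking operation
    private String callSlowBackend() { return "done"; }
}

The point to notice is that the container thread is released as soon as doGet() returns, while one of the ten pool threads carries the blocking call.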

Related

Reactor pattern: how it works with threads

I started reading the Vert.x framework documentation, but I didn't understand how it works or what the Reactor pattern is. I read this article https://dzone.com/articles/understanding-reactor-pattern-thread-based-and-eve and noticed that instead of the usual servlet-based (one request, one thread) approach, the Reactor pattern uses an event-driven architecture where a single thread, called the event loop, takes a request, puts it into some sort of job queue, and registers a handler that will be executed once the task has finished. The code in the handler is executed by this event loop, so the golden rule is: don't block the event loop.
What I don't understand is this, from the article:
Those handlers/callbacks may utilize a thread pool in multi-core environments.
So handlers use a thread pool. How is this pool different from a standard thread pool, for example the one in a servlet container like Tomcat? How are these two concepts different from each other in the case of an HTTP server, if both use a thread pool to manage requests?
Thanks in advance
Forget that DZone article. Forget the Reactor pattern. Learn Asynchronous procedure call.
There are two ways to split all the work in a computer into parts: threads and tasks (in Java, tasks are Runnables). Tasks execute on a thread pool when they are ready. When they are not ready, they do not occupy a thread with its huge stack, so we can afford to have millions of tasks in a single JVM instance, while 10,000 threads in a single JVM instance is already problematic.
The main problem with tasks is when a task needs data that is not ready (not yet calculated by another task, or not yet arrived via the network). In the thread world, a thread waiting for data executes a blocking operation like InputStream.read(), but tasks are not allowed to do this, or they would occupy too many threads from the thread pool and all the advantages of task-based programming would be lost. So tasks are augmented with mechanisms that submit the task to the thread pool exactly when all of its parameters have arrived. A task with such a mechanism is called an asynchronous procedure call. All the event-driven architectures are variants of asynchronous procedure calls: Vert.x, RxJava, Project Reactor, Akka actors, etc. They just pretend to be something original and don't always talk about this.
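A minimal sketch of an asynchronous procedure call using plain CompletableFuture (all names and the pool size are illustrative): the combining task occupies no thread until both of its inputs have arrived, and only then is it submitted to the pool.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ApcDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // two independent tasks producing the "parameters"
        CompletableFuture<Integer> a =
            CompletableFuture.supplyAsync(() -> fetchFromNetwork(), pool);
        CompletableFuture<Integer> b =
            CompletableFuture.supplyAsync(() -> computeSomething(), pool);

        // the asynchronous procedure call: this task holds no thread
        // until BOTH a and b are complete, then runs on the pool
        a.thenCombineAsync(b, Integer::sum, pool)
         .thenAccept(sum -> System.out.println("sum = " + sum))
         .join();  // only the demo's main thread blocks here

        pool.shutdown();
    }

    private static int fetchFromNetwork() { return 40; }  // stand-in for I/O
    private static int computeSomething() { return 2; }
}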

Java: Does making async calls increase number of threads?

So let's say I have a web app, and for every request we spawn a new thread. Hundreds of requests come in; somewhere in the web server code we make synchronous calls to several services, and we block and wait. This approach bloats the number of threads we have, as the synchronous calls create a bottleneck.
Supposedly, if we switch these calls to async requests we get rid of the bottleneck as the threads can continue and the callbacks will handle whatever needs to happen.
As far as I understand, in Java, in order to make an async call we spawn a new thread that makes the network call and contains the callback (I won't be implementing this; I'm assuming that's how some of the Java HTTP libraries work).
So my question: how does this solve the problem of many threads? Async requests end up creating more threads (one for each request) and then going to sleep until something is returned. Doesn't this create many sleeping threads?
The problem I am trying to solve is that at some point, when there are too many threads, the JVM explodes.
Specifically in web service / servlet environments:
In the simplest configuration, common web servers (Jetty, Tomcat) are configured with a fixed number of threads, or a range of thread counts. If more requests arrive than there are threads, those requests pile up in the kernel's connection queue. A thread accepts a connection and does all the work; when the response is sent, the thread is available for another connection. Adding your own thread pool or executor service won't help with that.
In more complex configurations, the web container accepts connections on one pool of threads and then dispatches the work on another, with a queue in between. Then, instead of blocking clients on connect, or having them fail to connect, they just wait.
In async servlet processing, such as the JAX-RS @Suspended AsyncResponse object, you get to control the details of this yourself. The container calls you with a data structure that includes the connection. Your code can put that object into some queue (possibly just the queue built into an ExecutorService) and return. That frees the web server thread to accept another connection. Your threads, probably from an ExecutorService, work through the queue, processing requests and sending responses.
What you never do is create an unbounded number of threads.
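A minimal sketch of that pattern with the JAX-RS 2.0 server API (the resource path, pool size, and buildReportSomehow() are made up for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/report")
public class ReportResource {

    // a bounded pool shared by all requests; 20 is an example size
    private static final ExecutorService executor = Executors.newFixedThreadPool(20);

    @GET
    public void getReport(@Suspended final AsyncResponse asyncResponse) {
        // the container thread is freed as soon as this method returns
        executor.submit(() -> {
            String result = buildReportSomehow();  // the long-running work
            asyncResponse.resume(result);          // pushes the response
        });
    }

    // stand-in for the slow operation
    private String buildReportSomehow() { return "report"; }
}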
Asynchronous means the request is processed by another thread. It doesn't have to be a dedicated thread, let alone a new thread.
For instance, consider JAX-RS asynchronous client callbacks:
// client is a javax.ws.rs.client.Client; respondWith and respondWithError
// are this application's own response helpers
client.target("http://example.com/resource/")
      .request()
      .async()
      .get(new InvocationCallback<String>() {
          @Override
          public void completed(String dataFromBackendServer) {
              respondWith(dataFromBackendServer);
          }

          @Override
          public void failed(Throwable throwable) {
              respondWithError(throwable);
          }
      });
Here, the InvocationCallback is executed in a thread provided by the JAX-RS implementation, which waits for a response to any pending backend request and then processes that response using the appropriate InvocationCallback. Because a single thread can wait on any number of pending backend requests, fewer threads are needed.
That said, synchronous processing is often easier to implement, and while it does not scale quite as well as asynchronous processing, it scales sufficiently for many applications. That is, unless you have thousands of concurrent requests, the plain old synchronous processing model will do.
There is no such problem as "too many threads": servers always have a thread pool. They assign a thread from the pool to each request; if no thread is available, the server just makes the request socket wait in the ServerSocket's queue.
The problem that async request processing in Servlet 3 tries to solve is poor utilization of resources due to the blocking of request-processing threads.
So if there are long-running requests that just wait for I/O, they are put on hold until the response is received from the I/O channel, and their thread is assigned to another request waiting in the socket queue.
This gives us better resource (mainly CPU) utilization and more throughput, as more requests (the short-duration ones) are served per second.
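A rough back-of-envelope illustration (the numbers are made up): suppose each request spends 950 ms blocked on a backend call and only 50 ms doing CPU work. In the synchronous model with a 100-thread pool, each request holds a thread for the full second, so throughput tops out at about 100 requests/s. If the 950 ms wait is handled asynchronously, the same 100 threads only carry the 50 ms CPU slices, so in principle up to about 2,000 requests/s (100 threads x 20 slices per second) can be served, limited by the backend rather than by the thread pool.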

How do sync and async request processing differ in Tomcat?

I cannot figure out the difference between sync and async calls in Tomcat.
Everywhere I use NIO. I have thousands of connections managed by a few Tomcat threads. When a long synchronous request comes in, a thread is borrowed from the Tomcat thread pool and processes the request. This thread waits for the long process to complete and then writes the result to the HttpResponse, so resources are wasted just on waiting. When a long asynchronous request comes in, the Tomcat thread creates a separate thread, the long process starts within this new thread, and the Tomcat thread returns to the pool almost immediately.
Have I understood correctly? If so, I don't see any difference between sync and async modes, because in both modes the same number of threads is used.
The difference is "pull" versus "push". Yes, you are correct: either way a thread must be allocated to do the work.
But with a sync request you would have to create the worker thread manually and poll for the task result from the client, whereas with async the server pushes the result to the client when the task completes.
The latter is slightly more efficient because your server doesn't have to process many poll requests per result.
Thanks, figured it out. A sync request is the case where one thread is borrowed for one request and waits for and pulls the necessary data. An async request is the case where there is just one thread, separate from the requests, which waits for data and pushes it to the requests' async contexts, i.e., the clients' output streams. When a client makes an async request, no additional thread is created; its async context is simply added to a subscriber list. When data appears, one thread walks through this list and writes the data to every async context. The result: a sync request means one thread per request; an async request means one (or a few more) threads for many simultaneous requests.
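A minimal sketch of that subscriber-list idea, assuming Servlet 3.0 APIs; Broadcaster, subscribe, and publish are made-up names, and error handling is kept to a minimum:

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.http.HttpServletRequest;

public class Broadcaster {

    // parked requests waiting for data; no thread is held per request
    private final Queue<AsyncContext> subscribers = new ConcurrentLinkedQueue<>();

    // called from doGet(): park the request instead of blocking a thread
    public void subscribe(HttpServletRequest request) {
        AsyncContext ctx = request.startAsync();
        ctx.setTimeout(30_000);  // give up after 30 s
        subscribers.add(ctx);
    }

    // called by the single data-producing thread when data appears
    public void publish(String data) {
        AsyncContext ctx;
        while ((ctx = subscribers.poll()) != null) {
            try {
                ctx.getResponse().getWriter().write(data);
            } catch (IOException ignored) {
                // client probably went away; still complete() below
            }
            ctx.complete();
        }
    }
}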

Asynchronous processing of Requests by Servlet

I came across the Asynchronous processing of requests by Servlets, as I was exploring how a NodeJS application and a Java application handles a request.
From what I have read in different places:
The request will be received and processed by an HTTP thread from the servlet container, and in the case of blocking operations (like I/O), the request can be handed over to another thread pool, while the HTTP thread that received the request goes back to receive and process the next request.
The time-consuming blocking operation will then be taken up by a worker from that thread pool.
If what I have understood is correct, I have the following question:
Even the thread that processes the blocking operation is going to wait for that operation to complete, and hence still blocks resources (and the number of threads that can run in parallel is limited by the number of cores), if I am right.
What exactly is the gain here by using asynchronous processing?
If not, enlighten me please.
I can explain the benefits in terms of Node.js (equally applicable elsewhere).
The problem: blocking network I/O.
Suppose you want to create a connection to your server. In order to read from that connection you need a thread, T1, which reads data over the network for that connection. This read method is blocking, i.e., your thread waits indefinitely until there is any data to read. Now suppose another connection arrives around the same time; to handle it, you have to create another thread, T2. It is quite possible that this thread will again be blocked reading data on the second connection, which means you can handle only as many connections as you can handle threads in your system. This is called the thread-per-request model. Creating lots of threads degrades system performance because of heavy context switching and scheduling; this model doesn't scale well.
Solution:
A little background: FreeBSD and Linux provide the system calls kqueue and epoll, respectively. Both accept a list of socket file descriptors (as function parameters); the calling thread blocks until one or more sockets have data ready to read, and the call returns the sublist of those ready connections. Ref. http://austingwalters.com/io-multiplexing/
Now, assuming you have a feel for those calls, imagine a thread called the event loop that calls epoll/kqueue.
So in Java your code would look something like this (a pseudocode sketch; epoll, Connection, and submitToWorkerThreads are illustrative, not real Java APIs):
/* Called by the event loop thread */
while (true) {
    /*
     * socketFd is the socket on which your server is listening;
     * epoll() returns the connections that are ready to read.
     */
    List<Connection> readyConnections = epoll(socketFd);

    /*
     * Worker threads read data from these connections, which is very
     * fast because the data is already ready, so they never wait.
     */
    submitToWorkerThreads(readyConnections);

    /*
     * Callback methods are queued by the worker threads together with
     * the data they read, and the event loop thread executes them here.
     * This is where the main bottleneck is in Node.js: if a callback
     * contains a long-running task, say a loop of 1M iterations, the
     * event loop is busy and cannot accept new connections. In practice,
     * in-memory computation is very fast compared to network I/O.
     */
    executeCallbackMethodsFromQueue();
}
So now you can see that the above approach can accept many more connections than the thread-per-request model. The worker threads are also never stuck, as they read only from connections that have data. Once a worker thread has read all of the data, it queues its response (or the data) together with the callback handler you provided at listen time; this callback method is then executed by the event loop thread.
The above approach has two disadvantages:
It cannot properly use all the cores of a multiprocessor.
Long in-memory computations will degrade performance significantly.
The first disadvantage can be taken care of by clustering Node.js, i.e., running one Node.js process per CPU core.
Anyway, have a look at Vert.x; it is similar to Node.js, but in Java.
Also explore Netty.
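For a taste of what the event-loop style looks like in Java, here is a minimal Vert.x HTTP server sketch (assuming Vert.x 3.x core APIs; the port is arbitrary):

import io.vertx.core.Vertx;

public class EventLoopServer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();  // manages the event-loop threads

        vertx.createHttpServer()
             // this handler runs on an event-loop thread: never block in it
             .requestHandler(req -> req.response()
                                       .putHeader("content-type", "text/plain")
                                       .end("hello from the event loop"))
             .listen(8080);
    }
}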
Yes, in this scenario the blocking operation will execute in its own thread and will block some resources, but your HTTP thread is now free to process other operations that might not be so time-consuming.
The gain of asynchronous processing is the ability to continue handling other requests while waiting for a heavyweight operation's response, instead of uselessly blocking an HTTP thread.

Should I manage my async tasks or let container to do so (servlets 3)

With request.startAsync() you get an AsyncContext that you can start(). From there, the servlet container manages the lifecycle of the passed Runnable, while the original thread finishes and is returned to the pool.
Should I rely on the servlet container's management of my Runnables, or would it be better to create (for example) a context-scoped queue and use, say, a fixed thread pool of executors to process the created AsyncContexts (without actually start()ing them)?
That way I would have more control over the async work and configurable threads (I don't know whether the spec lets you configure that on the servlet container?).
It's my understanding that you need to deal with the resulting Runnables yourself, by passing them to an Executor / ExecutorService. This isn't something that the servlet container will handle for you.
By asking for an AsyncContext, you are essentially telling the servlet container not to retain a thread to handle this request, and you wrap the eventual response generation that would have been performed (in the old synchronous world) in a Runnable together with the AsyncContext. At this point, it is up to you to see that the Runnable gets run and the response generated; how it gets executed is up to you: queuing, priority, thread pool size, etc.
I'd say something like a ThreadPoolExecutor with a queue would work well. That way, if you want to start rejecting requests once the in-progress count reaches a certain size, you can choose how to do that yourself (a "service overloaded" response, or similar).
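A minimal sketch of that arrangement (the pool and queue sizes are arbitrary examples, the servlet name and doTheSlowWork() are made up, and error handling is trimmed):

import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/work", asyncSupported = true)
public class ManagedAsyncServlet extends HttpServlet {

    // 10 workers, at most 100 queued requests; beyond that, reject
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            10, 10, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        AsyncContext ctx = req.startAsync();  // note: no ctx.start() here
        try {
            executor.execute(() -> {
                try {
                    ctx.getResponse().getWriter().write(doTheSlowWork());
                } catch (IOException ignored) {
                    // client likely disconnected
                } finally {
                    ctx.complete();
                }
            });
        } catch (RejectedExecutionException e) {
            // queue full: shed load with an explicit "overloaded" response
            resp.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            ctx.complete();
        }
    }

    // stand-in for the long-running work
    private String doTheSlowWork() { return "done"; }
}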
