I'm trying to understand Spring WebFlux. The characteristics I've found so far: reactive at the core, no Servlet API required, no thread per request, HTTP/2, server push, and application/stream+json.
But what is the difference from asynchronous calls in Spring MVC? I mean, in Spring MVC, when you return a Future, DeferredResult, etc., the logic in the request handler (controller method) is executed in a separate thread, so you also benefit by saving thread-pool resources for dispatching requests.
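For example, a rough sketch of the kind of Spring MVC handler I mean (the executor field and slowCall() are just placeholders):

@GetMapping("/mvc-async")
public DeferredResult<String> mvcAsync() {
    DeferredResult<String> result = new DeferredResult<>();
    // the container thread returns immediately; the result is set later from another pool
    executor.submit(() -> result.setResult(slowCall()));
    return result;
}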
So could you please highlight the differences related to that? Why is WebFlux better here?
Thank you very much for your time!
The Servlet async model introduces an async boundary between the container threads (1 Servlet request/thread model) and the processing of the request in your application. Processing can happen on a different thread or wait. In the end, you have to dispatch back to a container thread and read/write in a blocking way (InputStream and OutputStream are inherently blocking APIs).
With that model, you need many threads to achieve concurrency (because many of those can be blocked waiting for I/O). This costs resources and it can be a tradeoff, depending on your use case.
With non-blocking code, you only need a few threads to process a lot of requests concurrently. This is a different concurrency model; like any model, there are benefits and tradeoffs coming with it.
For more information about that comparison, this Servlet vs. Reactive stacks talk should be of interest.
The Servlet API uses blocking I/O, which requires one thread per HTTP request. Spring MVC async relies on the Servlet API, which only provides async behavior between the container threads and the request-processing threads, not end to end.
Spring WebFlux, on the other hand, achieves concurrency with a fixed number of threads by using HTTP sockets and pushing chunks of data through the sockets as they become available. This mechanism is called an event loop, an idea made popular by Node.js. Such an approach is scalable and resilient. Spring 5's spring-webflux module uses the event-loop approach to provide async behavior.
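As a rough illustration (the endpoint itself is made up), a WebFlux handler can stream data so that each element is written to the socket by event-loop threads as it becomes available, instead of one thread blocking for the whole response:

@GetMapping(value = "/ticks", produces = "application/stream+json")
public Flux<Long> ticks() {
    // elements are pushed to the client one at a time on event-loop threads;
    // no request thread sits blocked between elements
    return Flux.interval(Duration.ofSeconds(1)).take(5);
}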
More can be read from:
Servlet vs. Reactive
Spring Boot performance battle
Comparing WebFlux with Spring Web MVC
I'm new to Spring and I'm reading the book "Pro Spring Boot 2". It says that Spring Web MVC has some blocking on each request, while Spring WebFlux is a completely non-blocking stack.
Could you please explain what is meant by that?
A request that comes in to Spring MVC occupies one thread to execute it. When and why does that thread get blocked?
And why doesn't Spring WebFlux block a thread?
1. Spring Web MVC uses a single thread to handle each request to your API. Spring WebFlux does not tie up a thread per request, because no thread is kept waiting for something to be done (e.g. waiting for an answer from a database).
2. As written in 1., that thread can be blocked while waiting for an answer from a database or from another service that is called via HTTP.
3. Spring WebFlux takes advantage of the reactive stack (take a look at https://projectreactor.io/), which is fully non-blocking. This means that no thread is blocked waiting for something to happen. Everything is based on reactive streams publishers (Mono and Flux), making your code react to data becoming available (from a database or from another service called via HTTP, for example).
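For instance, a minimal sketch (the URL, the Price type and the webClient bean are assumptions) of a handler built on WebClient; it only declares what should happen when the data arrives, and no thread waits while the remote call is in flight:

@GetMapping("/price/{id}")
public Mono<Price> price(@PathVariable String id) {
    // returns immediately; the body is emitted once the remote response arrives
    return webClient.get()
                    .uri("/prices/{id}", id)
                    .retrieve()
                    .bodyToMono(Price.class);
}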
Frameworks like Node.js and ASP.NET Core are capable of processing requests asynchronously for I/O tasks without creating additional threads. Are Java servlet containers also capable of doing this? If not, do Java servlet containers wait for I/O tasks on the request thread until the request is fully processed?
Okay, I found the answer myself.
According to the Jakarta EE 9 documentation:
There are two common scenarios in which a thread associated with a request can be sitting idle.
The thread needs to wait for a resource to become available or process data before building the response. For example, an application may need to query a database or access data from a remote web service before generating the response.
The thread needs to wait for an event before generating the response. For example, an application may have to wait for a Jakarta Messaging message, new information from another client, or new data available in a queue before generating the response.
These scenarios represent blocking operations that limit the scalability of web applications. Asynchronous processing refers to assigning these blocking operations to a new thread and returning the thread associated with the request immediately to the container.
So, Java servlet containers are capable of asynchronous processing. However, they will create a new thread for both I/O-bound and CPU-bound tasks, which is not the same model as Node.js and ASP.NET Core.
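For completeness, here is a minimal sketch of that Servlet asynchronous-processing model (the servlet, pool size and queryDatabase() are illustrative; jakarta.servlet.* imports are assumed, javax.servlet.* on older containers). Note that the blocking work still occupies some thread, just not the container's request thread:

@WebServlet(urlPatterns = "/report", asyncSupported = true)
public class ReportServlet extends HttpServlet {
    private final ExecutorService pool = Executors.newFixedThreadPool(10);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();      // return the request thread to the container
        pool.submit(() -> {                       // blocking work moves to another thread
            try {
                ctx.getResponse().getWriter().write(queryDatabase());
            } catch (IOException e) {
                // log the failure; still complete the async request below
            } finally {
                ctx.complete();                   // dispatch back to the container
            }
        });
    }

    private String queryDatabase() { return "result"; }   // placeholder for a blocking call
}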
We have a flow which we would like to implement with Reactive programming using Spring Boot 2 WebFlux. Currently we have no experience with Reactive programming.
As part of this flow we are going to make one or more HTTP requests (I guess using WebClient) and also read some data from a DB.
We are considering using AWS DynamoDB, but as far as I understand the Java SDK does not offer a reactive API. This read will be a blocking I/O operation. My question is whether there is still a benefit to implementing part of this flow with WebFlux. More generally, does a single blocking I/O operation in the flow eliminate all the benefit that we get from reactive programming?
Based on your question, reactive is the ideal way to deal with blocking operations, especially I/O (network, file, etc.).
You can either use a library that implements the API in a reactive way, or wrap the blocking request with a reactive API; the latter is usually done by placing the blocking operation on another thread pool.
In Spring WebFlux you can achieve this with something like:
@GetMapping
public Mono<Response> getResponse() {
    // wrap the blocking call; Response and blockingOp() are placeholders
    return Mono.fromCallable(() -> blockingOp())
               .publishOn(Schedulers.elastic());
}
publishOn in that case moves the processing of this flow onto another thread; you can also supply a dedicated thread pool of your choice.
From the docs, elastic is a
Scheduler that dynamically creates ExecutorService-based Workers and caches the thread pools, reusing them once the Workers have been shut down.
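If you prefer a dedicated pool, a rough sketch could look like the following (the pool size is arbitrary; note that the Reactor reference documentation wraps blocking calls with subscribeOn, and newer Reactor versions offer Schedulers.boundedElastic() for exactly this purpose):

// a dedicated thread pool for blocking calls, wrapped as a Reactor Scheduler
Scheduler blockingPool = Schedulers.fromExecutorService(Executors.newFixedThreadPool(20));

Mono<Response> response = Mono.fromCallable(() -> blockingOp())   // blockingOp() as above
                              .subscribeOn(blockingPool);         // run the callable on that pool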
The following may not answer your question fully, but might be a little helpful. The FAQ for Spring Framework 5 includes this question:
What if there is no reactive library for my database?
The answer to this is:
One suggestion for handling a mix of blocking and non-blocking code would be to use the power of a microservice boundary to separate the blocking backend datastore code from the non-blocking front-end API. Alternatively, you may also go with a worker thread pool for blocking operations, keeping the main event loop non-blocking that way.
I think someone from Pivotal might be the right person to give more insights on this.
Currently experimenting with reactive programming using Spring 5.0.0.RC2, Reactor 3.1.0.M2 and Spring Boot 2.0.0.M2.
I'm wondering about the concurrency and threading model used by WebFlux and Reactor, so that I can properly code the application and handle mutable state.
The Reactor doc states that the library is considered concurrency agnostic and mentions the Scheduler abstraction. The WebFlux doc gives no information on this.
Yet when using WebFlux through Spring Boot, a threading model is defined.
From my experiments, here is what I got (an illustrative sketch follows the list):
The model is neither 1 event thread, nor 1 event thread + workers
Several thread pools are used
"reactor-http-nio-3" threads: probably one per core, handle the incoming HTTP requests
"Thread-7" threads: used by async requests to MongoDB or HTTP resources
"parallel-1" threads: one per core, created by Schedulers.parallel() from Reactor, used by delay operators and such
Shared mutable state must be synchronized by the application
ThreadLocals (for application state, MDC logging, etc.) are not request scoped, so they are not very useful here
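For illustration, the kind of sketch behind those observations just prints the current thread name at different points of a made-up handler:

@GetMapping("/thread-names")
public Mono<String> threadNames() {
    return Mono.fromSupplier(() -> "handler on " + Thread.currentThread().getName())
               // delay operators run on Schedulers.parallel() by default
               .delayElement(Duration.ofMillis(10))
               .map(s -> s + ", after delay on " + Thread.currentThread().getName());
}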
Is this correct? What is the concurrency and threading model of WebFlux: for example, what are the default thread pools?
Thank you for the information
Since this question was asked, the documentation has been updated and now provides some clues about the concurrency model and the threads one can expect (though I still think that clearer descriptions of what happens under the hood from a multi-threading perspective would be highly appreciated by Spring newcomers).
It discusses the difference between Spring MVC and Spring WebFlux (1-thread-per-request model vs. event-loop):
In Spring MVC, and servlet applications in general, it is assumed that applications may block the current thread, e.g. for remote calls, and for this reason servlet containers use a large thread pool, to absorb potential blocking during request handling.
In Spring WebFlux, and non-blocking servers in general, it is assumed that applications will not block, and therefore non-blocking servers use a small, fixed-size thread pool (event loop workers) to handle requests.
Invoking a Blocking API
But notice that Spring MVC apps can also introduce some asynchronicity (cf. Servlet 3 async). I suggest this presentation for a discussion of Servlet 3.1 NIO versus WebFlux.
Back to the docs: it also suggests that, when working with reactive streams, you have some control:
What if you do need to use a blocking library?
Both Reactor and RxJava provide the publishOn operator to continue processing on a different thread.
(For more details on this, refer to scheduling in Reactor)
It also discusses the threads you may expect in WebFlux applications (bold is mine):
Threading Model
What threads should you expect to see on a server running with Spring WebFlux?
On a "vanilla" Spring WebFlux server (e.g. no data access, nor other optional dependencies), you can expect one thread for the server, and several others for request processing (typically as many as the number of CPU cores). Servlet containers, however, may start with more threads (e.g. 10 on Tomcat), in support of both servlet, blocking I/O and servlet 3.1, non-blocking I/O usage.
The reactive WebClient operates in event loop style. So you’ll see a small, fixed number of processing threads related to that, e.g. "reactor-http-nio-" with the Reactor Netty connector. However if Reactor Netty is used for both client and server, the two will share event loop resources by default.
Reactor and RxJava provide thread pool abstractions, called Schedulers, to use with the publishOn operator that is used to switch processing to a different thread pool. The schedulers have names that suggest a specific concurrency strategy, e.g. "parallel" for CPU-bound work with a limited number of threads, or "elastic" for I/O-bound work with a large number of threads. If you see such threads it means some code is using a specific thread pool Scheduler strategy.
Data access libraries and other 3rd party dependencies may also create and use threads of their own.
In part, you can configure the details of the threading model:
To configure the threading model for a server, you’ll need to use server-specific config APIs, or if using Spring Boot, check the Spring Boot configuration options for each server. The WebClient can be configured directly. For all other libraries, refer to their respective documentation.
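As a hedged example of such server-specific configuration (the bean, thread-name prefix and worker count are arbitrary), with Spring Boot and Reactor Netty the event loop can be customized through the server factory:

@Bean
public NettyReactiveWebServerFactory nettyFactory() {
    NettyReactiveWebServerFactory factory = new NettyReactiveWebServerFactory();
    // run the HTTP server on a custom event loop: 4 daemon worker threads named after "my-http"
    factory.addServerCustomizers(httpServer ->
            httpServer.runOn(LoopResources.create("my-http", 4, true)));
    return factory;
}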
Moreover, as for example the discussion "Default number of threads in Spring boot 2.0 reactive webflux configuration" highlights,
The default number of threads for request handling is determined by the underlying web server; by default, Spring Boot 2.0 is using Reactor Netty, which is using Netty's defaults
it is a matter of default components and their defaults (and the overall configuration, including what is injected transparently through annotations), which may also change across versions of Spring/Boot and the corresponding dependencies.
That said, your guesses seem correct.
I'd like incoming Java servlet web requests to invoke RabbitMQ using the RPC approach described here.
However, I'm not sure how to properly reuse callback queues between requests; as per the RabbitMQ tutorial linked above, creating a new callback queue for every request is inefficient (RabbitMQ may not cope even when using the queue TTL feature).
There would generally be only 1-2 RPC calls per servlet request, but obviously a lot of servlet requests per second.
I don't think I can share the callback queues between threads, so I'd want at least one per web worker thread.
My first idea was to store the callback queue in a ThreadLocal, but that can lead to memory leaks.
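To make that first idea concrete, a rough sketch (the class and method names are made up; the leak and channel-sharing concerns still apply):

// hypothetical helper: one exclusive, auto-delete callback queue per worker thread
final class ReplyQueues {
    private static final ThreadLocal<String> REPLY_QUEUE = new ThreadLocal<>();

    static String forCurrentThread(Channel channel) throws IOException {
        String queue = REPLY_QUEUE.get();
        if (queue == null) {
            // server-named ("" becomes amq.gen-*), non-durable, exclusive, auto-delete queue
            queue = channel.queueDeclare("", false, true, true, null).getQueue();
            REPLY_QUEUE.set(queue);
        }
        return queue;
    }
}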
My second idea was to store them in a session, but I am not sure they will serialize properly and my sessions are currently not replicated/shared between web servers, so it is IMHO not a good solution.
My infrastructure is Tomcat / Guice / Stripes Framework.
Any ideas what the most robust/simple solution is?
Am I missing something in this whole approach, and thus over-complicating things?
Note 1: This question relates to the overall business case described here (see option 1).
Note 2: There is a seemingly related question, "How to setup RabbitMQ RPC in a web context", but it is mostly concerned with proper shutdown of threads created by the RabbitMQ client.