Spring Async Task returning Future - java

I am developing a service that calls multiple external services that are independent of each other. I collate the responses of all these services and return them as a consolidated response. Since these are not interdependent, I am using Spring's @Async capability to perform all these activities in parallel. I am following the example provided in this link
https://spring.io/guides/gs/async-method/
Here, a while loop is used to wait until all the responses are obtained:
while (!(page1.isDone() && page2.isDone() && page3.isDone())) {
    Thread.sleep(10); // 10-millisecond pause between each check
}
I know this is sample code aimed at explaining the concept, which it does effectively. However, in an enterprise application, can a while loop be used similar to what is shown above, or should a different approach be adopted? If a different approach has to be adopted, what is its advantage over using a while loop?

Couldn't you just use Future.get()? It's a blocking call. It'll make sure to wait until the result is ready. You can do something like:
List<Object> results = Lists.newArrayList();
results.add(page1.get()); // get() blocks until the result is ready
results.add(page2.get());
results.add(page3.get());
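If the async methods can be changed to return CompletableFuture (which Spring's @Async supports), a rough sketch of waiting for all of them without polling could look like this, assuming page1, page2 and page3 are CompletableFutures of the guide's User type:
// Sketch only: waits for all three futures, then reads each completed value.
CompletableFuture.allOf(page1, page2, page3).join(); // blocks until all three complete
User result1 = page1.join(); // already completed, so these return immediately
User result2 = page2.join();
User result3 = page3.join();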

Related

How to incorporate async calls into a reactive pipeline

I've just discovered the joys of RxJava and its 10000 public methods, but I am struggling to understand how (and if) we should incorporate async apis into reactive pipelines.
To give an example, let's say I'm building a pipeline that:
takes keys from some cold source (or hot, in which case let's say we already have a way of dealing with an overactive source)
fetches data for those keys using an asynchronous client (or just applies any kind of async processing)
batches the data and
saves it into storage.
If we had a blocking api for step #2, it might look something like this.
source.map(key -> client.callBlocking(key))
      .buffer(500, TimeUnit.MILLISECONDS, 100)
      .subscribe(dataList -> storage.batchSave(dataList));
With a couple more calls, we could parallelise this, making it so that 100 threads are waiting on client.callBlocking at any given time.
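As a rough sketch of that parallelised blocking variant (RxJava, assuming Schedulers.io() and a cap of 100 concurrent calls):
// Sketch: run each blocking call on the io() scheduler, at most 100 in flight.
source.flatMap(
          key -> Observable.fromCallable(() -> client.callBlocking(key))
                           .subscribeOn(Schedulers.io()),
          100)
      .buffer(500, TimeUnit.MILLISECONDS, 100)
      .subscribe(dataList -> storage.batchSave(dataList));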
But what if the api we have is already asynchronous and we want to make use of that? I imagine the same pipeline would look something like this
source.magicMethod(new Processor() {
    // When downstream requests more items
    public void request(int count) {
        upstream.request(count);
    }

    // When upstream delivers us an item
    public void onNext(Object key) {
        client.callAsync(key)
              .onResult(data -> downstream.onNext(data));
    }
})
.buffer(500, TimeUnit.MILLISECONDS, 100)
.subscribe(data -> storage.batchSave(data));
What I want to know is which method is magicMethod. Or perhaps incorporating async calls into a pipeline is a terrible idea and we should never do it. (There is also the question of pre-fetching, so that downstream code does not necessarily have to wait for data after requesting it, but let's put that aside for now.)
Note that this is not a question about parallelism. The second version could run perfectly well in a single thread (plus whatever threads the client may or may not be using under the hood)
Also, while the question is about RxJava, I'd be just as happy to see an answer using Reactor.
Thanks for helping a poor old reactive noob :)
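For what it's worth, in RxJava the role of magicMethod is usually played by a flatMap variant: each key is wrapped in a Single built around the async callback, and the inner results are merged back into the stream. A rough sketch, reusing the hypothetical callAsync API from the question:
// Sketch: flatMapSingle merges the results of the async calls back into the pipeline.
source.flatMapSingle(key ->
          Single.create(emitter ->
              client.callAsync(key).onResult(data -> emitter.onSuccess(data))))
      .buffer(500, TimeUnit.MILLISECONDS, 100)
      .subscribe(dataList -> storage.batchSave(dataList));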

Synchronous and asynchronous call in Java and spring

I need to update 3 data regions by making a synchronous call to one data region and asynchronous calls to the other 2 data regions, using Java and Spring. Which is the best way to implement this?
Apart from the term region, I understand you want to make a few HTTP requests, one of which (the first) has to be blocking.
I would suggest you have a look at Spring's WebClient, which lets you make multiple requests in parallel.
The first (synchronous) call can be achieved by blocking on a Mono.
Here you can find a tutorial on Simultaneous Spring WebClient Calls:
https://www.baeldung.com/spring-webclient-simultaneous-calls
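A minimal sketch of what that could look like, assuming WebFlux's WebClient and made-up endpoint paths:
// Sketch only: one blocking call, then two calls run in parallel without blocking.
WebClient client = WebClient.create("https://example.org");

// Synchronous call: block on the Mono so the result is available before continuing.
String regionA = client.get().uri("/regions/a")
        .retrieve()
        .bodyToMono(String.class)
        .block();

// Asynchronous calls: start both and handle them when they complete.
Mono<String> regionB = client.get().uri("/regions/b").retrieve().bodyToMono(String.class);
Mono<String> regionC = client.get().uri("/regions/c").retrieve().bodyToMono(String.class);

Mono.zip(regionB, regionC)
    .subscribe(tuple -> {
        // tuple.getT1() and tuple.getT2() hold the two asynchronous responses
    });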
Cheers

Spring Async and timeshares

I'm trying to create an architecture using Java Spring which will have several background processes which will be running concurrently, listening and pulling information as it arrives from different ZMQ sockets.
I'm not sure of the best way to do this. Right now, I'm using the @Async annotation with a ThreadPoolTaskExecutor, but the @Async method seems to be blocking the next function call in the stack.
So my questions are
1) Will an @Async method block the next function call in the stack, or will it fire off that method in a new thread and continue executing the functions in the current thread?
2) Is there any way to give each thread an equal timeslice of computing power?
3) Are there any better ways to do this?
Thanks!
@Async will run the annotated method asynchronously using the specified executor.
There is no way to control the OS resources dedicated to threads.
Java has a very convenient CompletableFuture API for asynchronous computations. I've recently written a blog post about the problems with @Async and how they can be solved with CompletableFuture: Demystifying the Magic of Spring: @Async.
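A rough sketch of that CompletableFuture style, where pollSocket and handle are hypothetical placeholders for the ZMQ receive and the message handling:
// Sketch only: each socket is polled on the executor; the caller's thread is never blocked.
ExecutorService executor = Executors.newFixedThreadPool(4);

CompletableFuture<String> socket1 = CompletableFuture.supplyAsync(() -> pollSocket("tcp://host:5555"), executor);
CompletableFuture<String> socket2 = CompletableFuture.supplyAsync(() -> pollSocket("tcp://host:5556"), executor);

// Each result is handled whenever it arrives, independently of the others.
socket1.thenAccept(message -> handle(message));
socket2.thenAccept(message -> handle(message));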

How to improve the performance of a REST call which internally makes other REST calls

I am creating an endpoint which retrieves some data; internally it makes 3 different REST calls, and this hampers the performance of my application.
My Endpoint Code:
1. REST call to get the userApps()
2. iterate over userAPPs
2.1 make REST call to get the appDetails
2.2 make use of above response to call the 3rd REST call which returns list.
2.3 iterate over the above list and filter out the required fields and put it in main response object.
3.return response
So, this much complexity hampers the performance.
I have tried to add multithreading, but the time taken by the normal code and the multithreaded version is almost the same.
The constraint is that we cannot modify the 3 external REST calls to support pagination.
We cannot add pagination ourselves because we don't have any database.
Is there any solution?
You shouldn't add threading, you should remove threads altogether. I.e. you should make all your code non-blocking. This just means that all the work will basically be done in the http-client's threadpool, and all the waiting can be done in the operating system's selector (which we want).
Here is some code showing how this core logic would work, assuming your HTTP calls return CompletableFuture.
public CompletableFuture<List<Something>> retrieveSomethings() {
    return retrieveUserApps().thenCompose(this::retrieveAllAppsSomethings);
}

public CompletableFuture<List<Something>> retrieveAllAppsSomethings(List<UserApp> apps) {
    // start one call per app, then combine the results once all of them complete
    List<CompletableFuture<List<Something>>> futures = apps.stream()
            .map(this::retrieveAppSomethings)
            .collect(Collectors.toList());
    return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            .thenApply(ignored -> futures.stream()
                    .flatMap(future -> future.join().stream()) // join() returns immediately here
                    .collect(Collectors.toList()))
            .thenApply(this::filterSomethings);
}

public CompletableFuture<List<Something>> retrieveAppSomethings(UserApp app) {
    return retrieveAppDetails(app).thenCompose(this::retrieveAppDetailSomethings);
}
All this does is make everything non-blocking, so everything that can run in parallel will run in parallel. There is no need to limit anything, since everything will run in the HTTP client's threadpool, which is most likely limited anyway; and in any case, waiting does not take up a thread.
All you have to do for the above code is to implement retrieveUserApps(), retrieveAppDetails(app) and retrieveAppDetailSomethings(appDetail). All of these should return a CompletableFuture<> and be implemented with the async-enabled version of your HTTP client.
This will make retrieving data for 1 app or 100 apps take about the same time, since all of those calls will run in parallel (assuming they all take the same time and the downstream systems can handle this many parallel requests).
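For illustration, a hedged sketch of one of those methods using Java 11's HttpClient, whose sendAsync already returns a CompletableFuture (the URL and the parseUserApps helper are made up):
private final HttpClient httpClient = HttpClient.newHttpClient();

public CompletableFuture<List<UserApp>> retrieveUserApps() {
    HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.org/user/apps")).build();
    return httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofString())
            .thenApply(HttpResponse::body)
            .thenApply(this::parseUserApps); // hypothetical JSON-parsing helper
}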

Run async task before returning Flux of DB entities

I have Flux<URL>. How can I make multiple concurrent void requests for each URL (for example myWebClient.refresh(URL)), then (after all requests done) read data from the database and return Flux<MyAnyEntity> (for example repo.findAll())?
You can achieve that using Flux/Mono operators:
// get the URIs from somewhere
Flux<URI> uris = //...

Flux<MyAnyEntity> entities = uris
        // map each URI to an HTTP client call and do nothing with the response
        .flatMap(uri -> webClient.get().uri(uri).exchange().then())
        // chain that call with a call on your repository
        .thenMany(repo.findAll());
Update:
This code is naturally asynchronous and non-blocking, so all operations in the flatMap operator will be executed concurrently, according to the demand communicated by the consumer (this is the backpressure we're talking about).
If the Reactive Streams Subscriber requests N elements, then up to N requests might be executed concurrently. I don't think this is something you want to deal with directly, although you can influence things using windowing operators for micro-batching operations.
Using .subscribeOn(Schedulers.parallel()) will not improve concurrency in this case - as stated in the reference documentation you should only use that for CPU-bound work.
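If you do want to cap the concurrency, a rough sketch using flatMap's concurrency argument (the limit of 8 is arbitrary):
Flux<MyAnyEntity> entities = uris
        // at most 8 refresh calls in flight at any time
        .flatMap(uri -> webClient.get().uri(uri).exchange().then(), 8)
        .thenMany(repo.findAll());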
