I am trying to perform two different operations, each on a different thread. Here is my code:
Uni.combine().all()
        .unis(callGetItem(), callGetItemDetail())
        .asTuple().subscribe().with(tuple -> {
            context.setItem(tuple.getItem1());
            context.setItemDetails(tuple.getItem2());
        });
Methods:
public Uni<ItemResponse> callGetItem() {
    Supplier<ItemResponse> supplier = () -> itemService.getItem("item_id_1");
    return Uni.createFrom().item(supplier);
}

public Uni<ItemDetailsResponse> callGetItemDetail() {
    Supplier<ItemDetailsResponse> supplier = () -> itemService.getItemDetail("dummy_item_id");
    return Uni.createFrom().item(supplier);
}
But when I run the code, both callGetItem() and callGetItemDetail() execute on the same thread (executor-thread-0).
What am I doing wrong?
Edit:
When I give my Unis an executor service created with Executors.newFixedThreadPool(2), they still work on a single thread. I modified callGetItem() and callGetItemDetail() as follows:
public Uni<ItemResponse> callGetItem() {
    Supplier<ItemResponse> supplier = () -> itemService.getItem("item_id_1");
    return Uni.createFrom().item(supplier).emitOn(executor);
}

public Uni<ItemDetailsResponse> callGetItemDetail() {
    Supplier<ItemDetailsResponse> supplier = () -> itemService.getItemDetail("dummy_item_id");
    return Uni.createFrom().item(supplier).emitOn(executor);
}
The executor is:
ExecutorService executor = Executors.newFixedThreadPool(2);
but they still run on the same thread. Do you have any idea why this happens?
Since you are composing different Unis using Uni.combine().all().unis().asTuple(), the combined Uni will emit its result (combination) after the last element has emitted its item.
The last upstream Uni will have its item emitted (as is the case for the other Unis as well) on whatever thread you have declaratively set it to emit on. Hence the combined Uni continues execution on that same thread.
As a result, when you access the combined tuple values, you are accessing them on that same carrier thread.
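Beyond the quoted answer, one way to actually run the two suppliers on worker threads in Mutiny is runSubscriptionOn, which moves the subscription (and therefore the supplier evaluation) onto the given executor, whereas emitOn only changes the thread on which downstream receives the item. A minimal sketch, assuming the same itemService and the ExecutorService named executor from the edit:

public Uni<ItemResponse> callGetItem() {
    return Uni.createFrom()
            .item(() -> itemService.getItem("item_id_1"))
            // evaluate the supplier on the executor instead of the subscriber's thread
            .runSubscriptionOn(executor);
}

public Uni<ItemDetailsResponse> callGetItemDetail() {
    return Uni.createFrom()
            .item(() -> itemService.getItemDetail("dummy_item_id"))
            .runSubscriptionOn(executor);
}

With a pool of two threads, the combined Uni subscribes to both upstreams and the two suppliers can then be evaluated concurrently on different threads.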
In my web application, I need to call more than 10 methods in a single API call. To make that efficient, I use an ExecutorService to run multiple tasks at the same time. Each method returns a different type of object, except fav_a(), fav_b(), and fav_c(). (Sample code is given below for clarity.)
@GetMapping(RequestUrl.INIT)
public ResponseEntity<Map<String, List<?>>> init() throws ExecutionException, InterruptedException {
    ExecutorService service = Executors.newFixedThreadPool(6);
    Future<List<Object>> method_a = service.submit(() -> someService.method_a());
    Future<List<Object>> method_b = service.submit(() -> someService.method_b());
    Future<List<Object>> method_c = service.submit(() -> someService.method_c());
    Future<List<FavouriteConverter>> fav_a = service.submit(() -> someService.fav_a());
    Future<List<FavouriteConverter>> fav_b = service.submit(() -> someService.fav_b());
    Future<List<FavouriteConverter>> fav_c = service.submit(() -> someService.fav_c());
    service.shutdown();
    List<FavouriteConverter> combinedFavourite = Stream.of(fav_a.get(), fav_b.get(), fav_c.get())
            .flatMap(f -> f.stream())
            .collect(Collectors.toList());
    combinedFavourite = combinedFavourite.stream()
            .sorted(Comparator.comparing(FavouriteConverter::get_id, Comparator.reverseOrder()))
            .limit(25)
            .collect(Collectors.toList());
    Map<String, List<?>> map = new HashMap<>();
    map.put("method_a", method_a.get());
    map.put("method_b", method_b.get());
    map.put("method_c", method_c.get());
    map.put("favourite", combinedFavourite);
    return new ResponseEntity<>(map, HttpStatus.OK);
}
First I need fav_a.get(), fav_b.get(), and fav_c.get() to build combinedFavourite. If any one of them is delayed, the logic will be wrong. Creating threads is expensive.
Does Stream automatically handle this kind of situation?
If fav_a(), fav_b(), and fav_c() finish their jobs earlier than the other methods, how can I build combinedFavourite on another thread? In other words, how do I keep a Future<List<FavouriteConverter>> combinedFavourite waiting until fav_a.get(), fav_b.get(), and fav_c.get() have finished? (Assume method_a(), method_b(), and method_c() are still running.)
No, Streams are not responsible for joining these threads.
Since you wait for the results of these three tasks and put them into a map that you return, wrapping that logic in a separate thread doesn't help you, as long as you still have to wait for and return the result.
Use ExecutorService::invokeAll to execute all the tasks and return a list of Futures when all are complete (when Future::isDone is true for each).
List<Future<List<Object>>> list = service.invokeAll(
        Arrays.asList(
                () -> someService.method_a(),
                () -> someService.method_b(),
                () -> someService.method_c()
        ));
Note that the following are guaranteed:
The resulting List<Future> is in the same order as the collection of tasks given (according to its Iterator).
All the tasks will run in separate threads as long as the number of pooled threads is greater than or equal to the number of submitted tasks (assuming no other tasks are using threads from the same pool).
This logic lets you work with completed results.
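Applied to the question's fav_* methods, a sketch might look like the following (someService and FavouriteConverter are the asker's types; since invokeAll returns only after every task has finished, the get() calls below do not block any further):

List<Future<List<FavouriteConverter>>> favFutures = service.invokeAll(
        Arrays.asList(
                () -> someService.fav_a(),
                () -> someService.fav_b(),
                () -> someService.fav_c()
        ));
List<FavouriteConverter> combinedFavourite = new ArrayList<>();
for (Future<List<FavouriteConverter>> f : favFutures) {
    combinedFavourite.addAll(f.get()); // already completed; get() only rethrows a task's exception, if any
}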
I am calling an async method inside a for loop and adding its future to a list. I am not sure why allOf is not waiting for all the futures to complete and returns a partial result. Have a look at my code.
I have one overridden method:
@Override
@Async
public CompletableFuture<SomeType> fetchData() {
    return new CompletableFuture<SomeType>();
}
I am calling the above method in a for loop on different instances.
First I get all the beans that implement the interface which has the method fetchData:
Map<String, SomeClass> allBeans = context.getBeansOfType(SomeClass.class);
List<SomeClass> list = allBeans.values().stream().collect(Collectors.toList());
for (SomeClass classInstance : list) {
    classInstance.fetchData().thenApply(x -> {
        // some DB persistence call
        futureList.add(x);
    });
}
}
After that I apply allOf so that all the futures get completed, but it does not wait for all of them and the main thread executes the rest of the flow.
CompletableFuture<Void> combinedFutures = CompletableFuture.allOf(
        futureList.toArray(new CompletableFuture[futureList.size()]));
CompletableFuture<List<FutureResponse>> finalList =
        combinedFutures.thenApply(v ->
                futureList.stream().map(m -> m.join()).collect(Collectors.toList()));
finalList - in this list I want the results of all the completed futures returned by the fetchData invocations.
In finalList I always get 2 objects, but fetchData runs 5 times (based on the number of instances); I can see from the logs that the remaining async calls complete afterwards. Could someone help here?
Observation: after putting the main thread to sleep for 30 seconds, I can see all 5 objects in the list. Could someone please tell me why the main thread is not waiting at allOf for all the futures to complete?
IIUC, what you want to do can be done more simply:
CompletableFuture<List<FutureResponse>> finalList = CompletableFuture.supplyAsync(() -> {
    // start the fetches and collect the individual futures
    List<CompletableFuture<FutureResponse>> fetches =
            allBeans.values().stream()
                    .map(SomeClass::fetchData)
                    .collect(toList());
    // join all the futures and return the list of their results
    return fetches.stream()
            .map(CompletableFuture::join)
            .collect(toList());
});
I think you can't do it in a single stream (i.e. map to fetch, then immediately map to join), because that might wait for the join before the next future is created.
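For contrast, here is a sketch of the single-stream version being warned against; because a sequential stream pulls one element at a time through the whole pipeline, each join() may run before the next fetchData() has even been started:

List<FutureResponse> results =
        allBeans.values().stream()
                .map(SomeClass::fetchData)
                .map(CompletableFuture::join) // can block before the next fetch is created
                .collect(toList());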
You have a race condition:
classInstance.fetchData().thenApply(x -> {
    futureList.add(x);
});
This code means that only when the future completes will x be added to futureList. That might be in 10 milliseconds, or in 2 hours, who knows? (It might be never, if the future completes exceptionally.)
So, when the code reaches
CompletableFuture.allOf(futureList....
there is no guarantee that the thenApply callbacks have been invoked. futureList could even be empty.
One way you could correct this code is:
Map<String, SomeClass> allBeans = context.getBeansOfType(SomeClass.class);
List<SomeClass> list = allBeans.values().stream().collect(Collectors.toList());
for (SomeClass classInstance : list) {
    futureList.add(classInstance.fetchData());
}
Or if you actually need to do something in a thenApply:
for (SomeClass classInstance : list) {
    futureList.add(
            classInstance.fetchData().thenApply(x -> whatever(x))
    );
}
This way, your futureList is populated not when an async result returns (which happens at an unknown time, and might never happen if the future fails with an exception), but as soon as the async call is created, which is what you actually want.
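Putting the pieces together, a sketch of the corrected flow could look like this (FutureResponse and the persist(...) helper are placeholders for the asker's result type and DB call, not names from the original code):

List<CompletableFuture<FutureResponse>> futureList = new ArrayList<>();
for (SomeClass classInstance : list) {
    // the future is added immediately; its value is the result of the thenApply stage
    futureList.add(classInstance.fetchData().thenApply(x -> persist(x)));
}
CompletableFuture<Void> combinedFutures =
        CompletableFuture.allOf(futureList.toArray(new CompletableFuture[0]));
List<FutureResponse> finalList = combinedFutures
        .thenApply(v -> futureList.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList()))
        .join(); // if the main thread is supposed to wait, this join() makes it wait for every future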
Setup:
public Mono<Mono<String>> getAsyncResult() { // should return Mono<String>
    return Mono.fromSupplier(() -> {
        if (stopEarly()) return Mono.just("STOPPED EARLY");
        int a = doSyncJob1();
        int b = doSyncJob2();
        return doAsyncJob(a, b).map(string1 -> toString2(string1));
    });
}
Right now the whole thing returns Mono<Mono<String>>. How can I get it to return Mono<String> without blocking?
The reason it's all inside Mono.fromSupplier() is that I don't need the tasks to block and happen immediately; they can be scheduled to run asynchronously. Maybe one way is to flatten what's inside Mono.fromSupplier(), but I'm not sure how to compose it.
Replace Mono.fromSupplier with Mono.defer.
Also, if the doSyncJob* calls block, they will block the subscriber thread. Therefore, you might want to add .subscribeOn(Schedulers.elastic()) after .defer(...) to ensure the blocking work is executed on a Scheduler meant for blocking work.
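A minimal sketch of the suggested change, keeping the question's method names (Schedulers.elastic() as mentioned in the answer; newer Reactor versions would use Schedulers.boundedElastic()):

public Mono<String> getAsyncResult() {
    // defer returns the inner Mono directly, so nothing is wrapped in a second Mono,
    // and the work only starts when someone subscribes
    return Mono.defer(() -> {
        if (stopEarly()) return Mono.just("STOPPED EARLY");
        int a = doSyncJob1();
        int b = doSyncJob2();
        return doAsyncJob(a, b).map(string1 -> toString2(string1));
    }).subscribeOn(Schedulers.elastic()); // run the blocking doSyncJob* calls off the subscriber thread
}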
I have the following scenario:
CompletableFuture<T> result = CompletableFuture.supplyAsync(task, executor);
result.thenRun(() -> {
    ...
});
// ....
// after some more code, based on some condition I attach the thenApply() to result.
if (x == 1) {
    result.thenApplyAsync(t -> {
        return null;
    });
}
The question is: what if the CompletableFuture's thread finishes execution before the main thread reaches thenApplyAsync? Will the CompletableFuture's result still attach itself to thenApply, i.e. should the callback be declared at the time of defining CompletableFuture.supplyAsync() itself?
Also, what is the order of execution? Is thenRun() always executed last (after thenApply())?
Is there any drawback to using this strategy?
You seem to be missing an important point. When you chain a dependent function, you are not altering the future you’re invoking the chaining method on.
Instead, each of these methods returns a new completion stage representing the dependent action.
Since you are attaching two dependent actions to result, which represents the task passed to supplyAsync, there is no relationship between these two actions. They may run in an arbitrary order, and even at the same time in different threads.
Since you are not storing the future returned by thenApplyAsync anywhere, the result of its evaluation would be lost anyway. Assuming that your function returns a result of the same type as T, you could use
if (x == 1) {
    result = result.thenApplyAsync(t -> {
        return null;
    });
}
to replace the potentially completed future with the new future that only gets completed when the result of the specified function has been evaluated. The runnable registered at the original future via thenRun still does not depend on this new future. Note that thenApplyAsync without an executor will always use the default executor, regardless of which executor was used to complete the other future.
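If you want the dependent function to run on your own executor rather than the default one, you can pass it explicitly; a small sketch reusing the question's executor:

if (x == 1) {
    result = result.thenApplyAsync(t -> {
        return null;
    }, executor); // the two-argument overload runs the function on the given executor
}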
If you want to ensure that the Runnable has been successfully executed before any other stage, you can use
CompletableFuture<T> result = CompletableFuture.supplyAsync(task, executor);
CompletableFuture<Void> thenRun = result.thenRun(() -> {
    //...
});
result = result.thenCombine(thenRun, (t, v) -> t);
An alternative would be
result = result.whenComplete((value, throwable) -> {
//...
});
but here, the code will always be executed, even in the exceptional case (which includes cancellation). You would have to check whether throwable is null if you want to execute the code only in the successful case.
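For example, a small sketch of restricting the code to the successful case:

result = result.whenComplete((value, throwable) -> {
    if (throwable == null) {
        // runs only when the future completed normally
    }
});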
If you want to ensure that the runnable runs after both actions, the simplest strategy would be to chain it after the if statement, when the final completion stage is defined:
if (x == 1) {
    result = result.thenApplyAsync(t -> {
        return null;
    });
}
result.thenRun(() -> {
    //...
});
If that is not an option, you would need an incomplete future which you can complete on either result:
CompletableFuture<T> result = CompletableFuture.supplyAsync(task, executor);
//...
CompletableFuture<T> finalStage = new CompletableFuture<>();
finalStage.thenRun(() -> {
    //...
});
// ...
if (x == 1) {
    result = result.thenApplyAsync(t -> {
        return null;
    });
}
result.whenComplete((v, t) -> {
    if (t != null) finalStage.completeExceptionally(t);
    else finalStage.complete(v);
});
The finalStage initially has no defined way of completion, but we can still chain dependent actions. Once we know the actual future, we can chain a handler which will complete our finalStage with whatever result we have.
As a final note, the methods without …Async, like thenRun, provide the least control over the evaluation thread. They may get executed in whatever thread completed the future, like one of executor’s threads in your example, but also directly in the thread calling thenRun, and even less intuitive, in your original example, the runnable may get executed during the unrelated thenApplyAsync invocation.
I have async tasks, represented by Futures, executing in a separate thread pool, which I want to join using RxJava. The "old" way of doing it using Java 5 constructs would be something like this (omitting collecting the results):
final Future<Response> future1 = wsClient.callAsync();
final Future<Response> future2 = wsClient.callAsync();
final Future<Response> future3 = wsClient.callAsync();
final Future<Response> future4 = wsClient.callAsync();
future1.get();
future2.get();
future3.get();
future4.get();
This would block my current thread until all the futures are completed, but the calls would run in parallel and the whole operation would only take about as long as the longest call.
I want to do the same using RxJava, but I'm a bit of a noob when it comes to modeling it correctly.
I've tried the following, and it seems to work:
Observable.from(Arrays.asList(1, 2, 3, 4))
        .flatMap(n -> Observable.from(wsClient.callAsync(), Schedulers.io()))
        .toList()
        .toBlocking()
        .single();
The problem with this approach is that I introduce the Schedulers.io() thread pool, which causes unnecessary thread switching, since I'm already blocking the current thread (using toBlocking()).
Is there any way I can model the Rx flow to execute the tasks in parallel, and block until all has been completed?
You should use the zip function.
For example, like this:
Observable.zip(
        Observable.from(wsClient.callAsync(), Schedulers.io()),
        Observable.from(wsClient.callAsync(), Schedulers.io()),
        Observable.from(wsClient.callAsync(), Schedulers.io()),
        Observable.from(wsClient.callAsync(), Schedulers.io()),
        (response1, response2, response3, response4) -> {
            // This is the zipping function...
            // You'll end up here when you've got all the responses.
            // Do what you want with them and return a combined result.
            // ...
            return null; // combined result instead of null
        })
        .subscribe(combinedResult -> {
            // Use the combined result
        });
Observable.zip can also work with an Iterable, so you can wrap your Observable.from(wsClient.callAsync(), Schedulers.io()) calls in one (an Iterable that yields four of those).
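For illustration, a sketch of that Iterable-based variant (RxJava 1.x assumed; the zip function for the Iterable overload receives the results as an untyped Object[], so they have to be cast back):

List<Observable<Response>> calls = Arrays.asList(
        Observable.from(wsClient.callAsync(), Schedulers.io()),
        Observable.from(wsClient.callAsync(), Schedulers.io()),
        Observable.from(wsClient.callAsync(), Schedulers.io()),
        Observable.from(wsClient.callAsync(), Schedulers.io()));
Observable<List<Response>> zipped = Observable.zip(calls, results ->
        Arrays.stream(results)
                .map(r -> (Response) r) // cast each Object back to the response type
                .collect(Collectors.toList()));
List<Response> responses = zipped.toBlocking().single(); // blocks until all four calls have completed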