Asynchronous Java: Help flatten my nested Mono - java

Setup:
public Mono<Mono<String>> getAsyncResult() { // should return Mono<String>
    return Mono.fromSupplier(() -> {
        if (stopEarly()) return Mono.just("STOPPED EARLY");
        int a = doSyncJob1();
        int b = doSyncJob2();
        return doAsyncJob(a, b).map(string1 -> toString2(string1));
    });
}
Right now the whole thing returns Mono<Mono<String>>. How can I get it to return Mono<String> without blocking?
The reason it's all inside Mono.fromSupplier() is that the tasks don't need to block and happen immediately; they can be scheduled to run asynchronously. Maybe one way is to flatten what's inside Mono.fromSupplier(), but I'm not sure how to compose it.

Replace Mono.fromSupplier with Mono.defer
Also, if the doSyncJob* calls block, they will block the subscriber thread. Therefore, you might want to add .subscribeOn(Schedulers.elastic()) after .defer(...) to ensure the blocking work is executed on a Scheduler meant for blocking work.
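A minimal sketch of the suggested change, keeping the method names from the question (note that Schedulers.elastic() is deprecated since Reactor 3.4 in favour of Schedulers.boundedElastic()):
public Mono<String> getAsyncResult() {
    return Mono.defer(() -> {
        if (stopEarly()) return Mono.just("STOPPED EARLY");
        int a = doSyncJob1();
        int b = doSyncJob2();
        return doAsyncJob(a, b).map(string1 -> toString2(string1));
    })
    // defer expects the supplier itself to return a Mono, so nothing is wrapped twice
    .subscribeOn(Schedulers.boundedElastic()); // or Schedulers.elastic() on older Reactor versions
}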

Related

io.smallrye.mutiny.Uni multi threading doesn't work (Quarkus)

I am trying to perform 2 different operations, each on a different thread. Here is my code:
Uni.combine().all()
    .unis(callGetItem(), callGetItemDetail())
    .asTuple()
    .subscribe().with(tuple -> {
        context.setItem(tuple.getItem1());
        context.setItemDetails(tuple.getItem2());
    });
Methods:
public Uni<ItemResponse> callGetItem() {
    Supplier<ItemResponse> supplier = () -> itemService.getItem("item_id_1");
    return Uni.createFrom().item(supplier);
}

public Uni<ItemDetailsResponse> callGetItemDetail() {
    Supplier<ItemDetailsResponse> supplier = () -> itemService.getItemDetail("dummy_item_id");
    return Uni.createFrom().item(supplier);
}
But when I run the code, both callGetItem() and callGetItemDetail() run on the same thread (executor-thread-0).
Where am I going wrong?
Edit:
When I give my Unis an executor service created with Executors.newFixedThreadPool(2), they still run on a single thread. I modified callGetItem() and callGetItemDetail() as:
public Uni<ItemResponse> callGetItem() {
    Supplier<ItemResponse> supplier = () -> itemService.getItem("item_id_1");
    return Uni.createFrom().item(supplier).emitOn(executor);
}

public Uni<ItemDetailsResponse> callGetItemDetail() {
    Supplier<ItemDetailsResponse> supplier = () -> itemService.getItemDetail("dummy_item_id");
    return Uni.createFrom().item(supplier).emitOn(executor);
}
executor is:
ExecutorService executor = Executors.newFixedThreadPool(2);
but they still run on the same thread. Do you have any idea why this happens?
Since you are composing the different Unis with Uni.combine().all().unis().asTuple(), the combined Uni will emit its result (the combination) only after the last element has emitted its item.
That last (upstream) Uni emits its item, as every Uni does, on whatever thread you have declaratively set it to emit on, so the combined Uni continues execution on that same thread.
As a result, when you access the combined group values, you are accessing them on that same carrier thread.
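Not part of the answer above, but a hedged sketch of the distinction that usually bites here: emitOn(executor) only changes where downstream callbacks are dispatched, whereas runSubscriptionOn(executor) moves the subscription itself, which is where Uni.createFrom().item(supplier) actually invokes the (blocking) supplier. The supplier below is a hypothetical stand-in for itemService.getItem(...):
import io.smallrye.mutiny.Uni;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

public class UniThreadingSketch {

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // Hypothetical blocking call standing in for itemService.getItem("item_id_1")
        Supplier<String> supplier = () -> "item_id_1";

        Uni<String> item = Uni.createFrom().item(supplier)
            // runSubscriptionOn moves the subscription, and therefore the supplier call,
            // onto a pool thread; emitOn would only move where downstream callbacks run
            .runSubscriptionOn(executor);

        item.subscribe().with(value ->
            System.out.println(Thread.currentThread().getName() + " -> " + value));

        executor.shutdown();
    }
}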

What is the right approach to combine CompletableFuture and JdbcTemplate?

I am trying to make some simple queries to the database using JdbcTemplate. Is my approach right?
@Async
public CompletableFuture<List<ResultClass1>> query1() {
    return CompletableFuture.completedFuture(jdbcTemplate.query("my sql", rowMap, parameter));
}

@Async
public CompletableFuture<List<ResultClass2>> query2() {
    return CompletableFuture.completedFuture(jdbcTemplate.query("my sql2", rowMap, parameter));
}

CompletableFuture<List<ResultClass1>> future1 = dao1.query1();
CompletableFuture<List<ResultClass2>> future2 = dao2.query2();
CompletableFuture.allOf(future1, future2).join();
I think you've got it, if what you're trying to do is run those queries simultaneously and wait for both to finish. However, the calling thread is blocked on join() until both complete. To avoid this, you typically do something like this:
CompletionStage<CombinedResult> combinedResult =
    CompletableFuture.allOf(future1, future2).thenApplyAsync(dummy -> {
        List<ResultClass1> query1Results = future1.join();
        List<ResultClass2> query2Results = future2.join();
        // more code...
        return someCombinedResult;
    });
This lets you keep writing code against the hypothetical combined result all the way up your call stack, so you can even return to your main process's event loop; the process is then free to do other things until your queries complete.
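For example, a non-blocking way to consume the combined result from the snippet above (handleCombinedResult is a hypothetical callback; CombinedResult and someCombinedResult are the placeholders from the answer):
combinedResult.thenAccept(combined -> {
    // runs only once both queries have finished, without blocking the calling thread
    handleCombinedResult(combined);
});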

defer thenApplyAsync execution

I have the following scenario:
CompletableFuture<T> result = CompletableFuture.supplyAsync(task, executor);
result.thenRun(() -> {
    ...
});
// ....
// after some more code, based on some condition, I attach thenApplyAsync() to result.
if (x == 1) {
    result.thenApplyAsync(t -> {
        return null;
    });
}
The question is: what happens if the CompletableFuture's thread finishes execution before the main thread reaches the thenApplyAsync call? Will the callback still attach itself to the result, i.e. should callbacks be declared at the time CompletableFuture.supplyAsync() itself is defined?
Also, what is the order of execution? Is thenRun() always executed last (after thenApply())?
Is there any drawback to this strategy?
You seem to be missing an important point. When you chain a dependent function, you are not altering the future you’re invoking the chaining method on.
Instead, each of these methods returns a new completion stage representing the dependent action.
Since you are attaching two dependent actions to result, which represents the task passed to supplyAsync, there is no relationship between these two actions. They may run in an arbitrary order, and even at the same time in different threads.
Since you are not storing the future returned by thenApplyAsync anywhere, the result of its evaluation would be lost anyway. Assuming that your function returns a result of the same type as T, you could use
if (x == 1) {
    result = result.thenApplyAsync(t -> {
        return null;
    });
}
to replace the potentially completed future with the new future that only gets completed when the result of the specified function has been evaluated. The runnable registered at the original future via thenRun still does not depend on this new future. Note that thenApplyAsync without an executor will always use the default executor, regardless of which executor was used to complete the other future.
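If the intent is to keep the dependent function on the same executor that ran the initial task, the overload that takes an executor can be used; a sketch reusing the executor variable from the question:
if (x == 1) {
    result = result.thenApplyAsync(t -> {
        return null;
    }, executor); // runs the function on the question's executor instead of the common pool
}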
If you want to ensure that the Runnable has been successfully executed before any other stage, you can use
CompletableFuture<T> result = CompletableFuture.supplyAsync(task, executor);
CompletableFuture<Void> thenRun = result.thenRun(() -> {
    //...
});
result = result.thenCombine(thenRun, (t, v) -> t);
An alternative would be
result = result.whenComplete((value, throwable) -> {
    //...
});
but here, the code will always be executed, even in the exceptional case (which includes cancellation). You would have to check whether throwable is null if you want to execute the code only in the successful case.
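For instance, a sketch of the success-only variant:
result = result.whenComplete((value, throwable) -> {
    if (throwable == null) {
        // run the follow-up code only when the stage completed normally
    }
});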
If you want to ensure that the runnable runs after both actions, the simplest strategy would be to chain it after the if statement, when the final completion stage is defined:
if (x == 1) {
    result = result.thenApplyAsync(t -> {
        return null;
    });
}
result.thenRun(() -> {
    //...
});
If that is not an option, you would need an incomplete future which you can complete on either result:
CompletableFuture<T> result = CompletableFuture.supplyAsync(task, executor);
//...
CompletableFuture<T> finalStage = new CompletableFuture<>();
finalStage.thenRun(() -> {
    //...
});
// ...
if (x == 1) {
    result = result.thenApplyAsync(t -> {
        return null;
    });
}
result.whenComplete((v, t) -> {
    if (t != null) finalStage.completeExceptionally(t);
    else finalStage.complete(v);
});
The finalStage initially has no defined way of completion, but we can still chain dependent actions. Once we know the actual future, we can chain a handler which will complete our finalStage with whatever result we have.
As a final note, the methods without …Async, like thenRun, provide the least control over the evaluation thread. They may get executed in whatever thread completed the future, such as one of executor's threads in your example, but also directly in the thread calling thenRun; even less intuitively, in your original example the runnable may get executed during the unrelated thenApplyAsync invocation.
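If the thread does matter, the …Async variants with an explicit executor give that control back; a sketch reusing the executor from the question:
result.thenRunAsync(() -> {
    // dispatched to the supplied executor rather than run inline
    // on the completing thread or the calling thread
}, executor);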

CompletionStage: Return a CompletionStage within exceptionally block [duplicate]

I don't see an obvious way to handle an exception with an asynchronous result.
For example, if I want to retry an async operation, I would expect something like this:
CompletionStage<String> cf = askPong("cause error").handleAsync((x, t) -> {
    if (t != null) {
        return askPong("Ping");
    } else {
        return x;
    }
});
Where askPong asks an actor:
public CompletionStage<String> askPong(String message) {
    Future sFuture = ask(actorRef, message, 1000);
    final CompletionStage<String> cs = toJava(sFuture);
    return cs;
}
However, handleAsync doesn't do what you might think it does: it just runs the callback on another thread asynchronously. Returning a CompletionStage from it is not correct.
Jeopardy question of the day: thenApply is to thenCompose as exceptionally is to what?
Is this what you are looking for?
askPong("cause error")
.handle( (pong, ex) -> ex == null
? CompletableFuture.completedFuture(pong)
: askPong("Ping")
).thenCompose(x -> x);
Also, do not use the ...Async methods unless you intend for the body of the supplied function to be executed asynchronously. So when you do something like
.handleAsync((x, t) -> {
    if (t != null) {
        return askPong("Ping");
    } else {
        return x;
    }
})
You are asking for the if-then-else to be run in a separate thread. Since askPong already returns a CompletionStage, there's probably no reason to run it asynchronously.
Jeopardy question of the day: thenApply is to thenCompose as exceptionally is to what?
I know this was initially a Java 8 question, but since Java 12 the answer would be exceptionallyCompose:
exceptionallyCompose[Async](Function<Throwable,? extends CompletionStage<T>> fn [, Executor executor])
Returns a new CompletionStage that, when this stage completes exceptionally, is composed using the results of the supplied function applied to this stage's exception.
As the JavaDoc indicates, the default implementation is:
return handle((r, ex) -> (ex == null)
        ? this
        : fn.apply(ex))
    .thenCompose(Function.identity());
That is, using handle() to call the fallback, and thenCompose() to unwrap the resulting nested CompletableFuture<CompletableFuture<T>> – i.e., what you would have done in previous versions of Java (like in Misha’s answer), except you would have to replace this with completedFuture(r).
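Applied to the askPong example from the question (assuming Java 12 or later), that collapses to:
CompletionStage<String> cf = askPong("cause error")
    .exceptionallyCompose(ex -> askPong("Ping"));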
After a lot of frustration in trying to figure out the proper way of doing Scala's recoverWith in Java 8, I ended up just writing my own. I still don't know if this is the best approach, but I created something like:
public RecoveryChainAsync<T> recoverWith(Function<Throwable, CompletableFuture<T>> fn);
With repeated calls to recoverWith, I queue up the functions inside the recovery chain and implement the recovery flow myself with "handle". RecoveryChainAsync.getCompletableFuture() then returns a representative CompletableFuture for the entire chain. Hope this helps.
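RecoveryChainAsync itself isn't shown in the answer; a minimal standalone sketch of the same recoverWith idea, built on the handle/thenCompose pattern from the accepted answer, might look like this:
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public final class Recovery {

    // Falls back to fn when the original future completes exceptionally,
    // otherwise passes the successful value through unchanged.
    public static <T> CompletableFuture<T> recoverWith(
            CompletableFuture<T> future,
            Function<Throwable, CompletableFuture<T>> fn) {
        return future
            .handle((value, ex) -> ex == null
                ? CompletableFuture.completedFuture(value)
                : fn.apply(ex))
            .thenCompose(Function.identity());
    }
}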

Proper termination of a stuck Couchbase Observable

I'm trying to delete a batch of Couchbase documents in rapid fashion according to some constraint (or update the document if the constraint isn't satisfied). Each deletion is dubbed a "parcel" in my terminology.
When executing, I run into very strange behavior: the thread in charge of this task works as expected for a few iterations (at best). After this "grace period", Couchbase gets "stuck" and the Observable doesn't call any of its Subscriber's methods (onNext, onCompleted, onError) within the defined period of 30 seconds.
When the latch timeout occurs (see the implementation below), the method returns but the Observable keeps executing (I noticed this because it kept printing debug messages while stopped at a breakpoint outside the scope of this method).
I suspect Couchbase is stuck because, after a few seconds, many Observables are left in some kind of "ghost" state: alive and reporting to their Subscribers, which in turn have nothing to do because the method in which they were created has already returned. This eventually leads to java.lang.OutOfMemoryError: GC overhead limit exceeded.
I don't know if what I claim here makes sense, but I can't think of another reason for this behavior.
How should I properly terminate an Observable upon timeout? Should I? Is there another way around this?
public List<InfoParcel> upsertParcels(final Collection<InfoParcel> parcels) throws InterruptedException {
    final CountDownLatch latch = new CountDownLatch(parcels.size());
    final List<JsonDocument> docRetList = new LinkedList<JsonDocument>();
    Observable<JsonDocument> obs = Observable
        .from(parcels)
        .flatMap(parcel ->
            Observable.defer(() -> {
                return bucket.async().get(parcel.key).firstOrDefault(null);
            })
            .map(doc -> {
                // In-memory manipulation of the document
                return updateDocs(doc, parcel);
            })
            .flatMap(doc -> {
                boolean shouldDelete = ... // Decide by inner logic
                if (shouldDelete) {
                    if (doc.cas() == 0) {
                        return Observable.just(doc);
                    }
                    return bucket.async().remove(doc);
                }
                return (doc.cas() == 0 ? bucket.async().insert(doc) : bucket.async().replace(doc));
            })
        );
    obs.subscribe(new Subscriber<JsonDocument>() {
        @Override
        public void onNext(JsonDocument doc) {
            docRetList.add(doc);
            latch.countDown();
        }

        @Override
        public void onCompleted() {
            // Due to a bug in RxJava, onError() / retryWhen() does not intercept exceptions
            // thrown from within the map/flatMap methods. Therefore, we need to recalculate
            // the "conflicted" parcels and send them for update again.
            while (latch.getCount() > 0) {
                latch.countDown();
            }
        }

        @Override
        public void onError(Throwable e) {
            // Same reason as above
            while (latch.getCount() > 0) {
                latch.countDown();
            }
        }
    });
    latch.await(30, TimeUnit.SECONDS);
    // Recalculating remaining failed parcels and returning them for another cycle of this method (there's a loop outside)
}
I think this is indeed due to the fact that a CountDownLatch doesn't signal the source that the flow of data processing should stop.
You could lean more on RxJava itself by using toList().timeout(30, TimeUnit.SECONDS).toBlocking().single() instead of collecting into an external (unsynchronized and thus unsafe) list and waiting on the CountDownLatch.
This will block until a List of your documents is returned.
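Applied to the upsertParcels method above, that would look roughly like this (RxJava 1.x operators, as used in the question):
List<JsonDocument> docRetList = obs
    .toList()                          // collect every emitted document into one List
    .timeout(30, TimeUnit.SECONDS)     // error out if the chain stalls, instead of leaking it
    .toBlocking()
    .single();                         // block the calling thread until the list (or an error) arrives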
When you create your Couchbase environment in code, set computationPoolSize to something large. When the Couchbase client runs out of threads while using the async API, it just stops working and won't ever call the callback.
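A sketch with the Couchbase Java SDK 2.x environment builder (the pool size of 16 is just an illustrative value, not a recommendation from the answer):
CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
    .computationPoolSize(16) // raise this if heavy async use exhausts the computation scheduler
    .build();
CouchbaseCluster cluster = CouchbaseCluster.create(env, "localhost");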
