I'm learning RxJava 2 with Vert.x and I would like to chain a successful configuration retrieval with some parallel tasks.
I've created a method that searches for the configuration and returns a Single; I subscribe to it and it works fine.
But I'm in doubt about where and how to call the subsequent tasks:
public void start(Future<Void> startFuture) throws Exception {
    Single<JsonObject> configSingle = prepareConfigurationAsync();
    configSingle.subscribe(onSuccess -> {
        System.out.println(onSuccess);
        // --> the subsequent tasks:
        Single<Boolean> task1 = prepareLongAsyncTask1(onSuccess);
        task1.subscribe(...);
        Completable task2 = prepareLongAsyncTask2(onSuccess);
        task2.subscribe(...);
    }, onError -> {
        startFuture.fail(onError);
    });
}
The way I did it seems to work, but without parallelism. How could I achieve it?
How and where should I dispose of those subscriptions?
Continuing with some other source is usually done via flatMap. Doing things in parallel is often done with zip or merge. In your case, I don't think you need the value of the inner Single as part of the output so you can try this:
Completable config = prepareConfigurationAsync()
    .flatMapCompletable(success -> {
        System.out.println(success);
        return Completable.mergeArray(
            prepareLongAsyncTask1(success)
                .doOnSuccess(innerSuccess -> { /* ... */ })
                .toCompletable(),
            prepareLongAsyncTask2(success)
                .doOnComplete(() -> { /* ... */ })
        );
    });

config
    .subscribe(() -> { /* completed */ }, error -> { /* error'd */ });
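As for disposing: one option is to keep the Disposable returned by subscribe and clear it when the verticle stops. A minimal sketch, assuming a CompositeDisposable field on your verticle (the field and the stop() override are illustrative, not part of your code):

private final CompositeDisposable disposables = new CompositeDisposable();

@Override
public void start(Future<Void> startFuture) {
    // config is the Completable built above
    disposables.add(config.subscribe(
        () -> startFuture.complete(),
        error -> startFuture.fail(error)));
}

@Override
public void stop() {
    // Disposing cancels whatever merged tasks are still running
    disposables.clear();
}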
I'm trying to perform 2 different operations, each on a different thread. Here is my code:
Uni.combine().all()
    .unis(callGetItem(), callGetItemDetail())
    .asTuple().subscribe().with(tuple -> {
        context.setItem(tuple.getItem1());
        context.setItemDetails(tuple.getItem2());
    });
Methods:
public Uni<ItemResponse> callGetItem(){
Supplier<ItemResponse> supplier = () -> itemService.getItem("item_id_1");
return Uni.createFrom().item(supplier);
}
public Uni<ItemDetailsResponse> callGetItemDetail(){
Supplier<ItemDetailsResponse> supplier = () -> itemService.getItemDetail("dummy_item_id");
return Uni.createFrom().item(supplier);
}
But when I run the code, both the callGetItem() and callGetItemDetail() methods work on the same thread (executor-thread-0).
Where am I going wrong?
Edit:
When I give an executor service, Executors.newFixedThreadPool(2), to my Unis, they still work on a single thread. I modified callGetItem() and callGetItemDetail() as:
public Uni<ItemResponse> callGetItem(){
Supplier<ItemResponse> supplier = () -> itemService.getItem("item_id_1");
return Uni.createFrom().item(supplier).emitOn(executor);
}
public Uni<ItemDetailsResponse> callGetItemDetail(){
Supplier<ItemDetailsResponse> supplier = () -> itemService.getItemDetail("dummy_item_id");
return Uni.createFrom().item(supplier).emitOn(executor);
}
executor is:
ExecutorService executor = Executors.newFixedThreadPool(2);
but they still work on the same thread. Do you have any idea why this happens?
Since you are composing different Unis using Uni.combine().all().unis().asTuple(), the combined Uni will emit its result (the combination) only after the last element has emitted its item.
The last (upstream) Uni will have its item emitted (as is the case for the other Unis as well) on whatever thread you have declaratively set it to emit on, so the combining Uni will continue execution on that same thread.
As a result, if you are accessing the combined group's values, you will be accessing them on that same carrier thread.
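If the goal is to have the two suppliers actually run on separate threads, here is a sketch of one way to do it, assuming your existing executor and Mutiny's runSubscriptionOn operator: emitOn only changes the thread on which downstream stages receive the item, while runSubscriptionOn moves the subscription (i.e. the supplier call itself) onto the executor.

ExecutorService executor = Executors.newFixedThreadPool(2);

public Uni<ItemResponse> callGetItem() {
    return Uni.createFrom()
        .item(() -> itemService.getItem("item_id_1"))
        .runSubscriptionOn(executor);   // the supplier runs on an executor thread
}

public Uni<ItemDetailsResponse> callGetItemDetail() {
    return Uni.createFrom()
        .item(() -> itemService.getItemDetail("dummy_item_id"))
        .runSubscriptionOn(executor);   // each subscription may land on a different pool thread
}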
I want to execute several blocking methods (network calls, computation tasks). I want to execute them in parallel and be notified when ALL of them complete or receive an error if ANY of them fails (throws an exception). They do not emit results so Observable.zip() is not going to help me.
So far I have:
Completable a = computationTaskA();
Completable b = computationTaskB();
Completable c = computationTaskC();
Completable all = Completable.concat(Arrays.asList(a, b, c));
all.subscribe(() -> {
    // all succeeded
}, e -> {
    // any failed
});
However, the Completable.concat() docs say: "Returns a Completable which completes only when all sources complete, one after another." I can't find a solution that would execute them in parallel.
You probably want to use Completable.merge/mergeArray:
Completable a = computationTaskA();
Completable b = computationTaskB();
Completable c = computationTaskC();
Completable all = Completable.mergeArray(a, b, c);
all.subscribe(
() -> { /* success all around! */ },
e -> { /* at least one failure :( */ }
);
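One caveat: mergeArray only subscribes to all three sources at once; whether the blocking work actually runs in parallel depends on each Completable doing its work on its own scheduler. A minimal sketch, assuming the tasks are blocking calls you can wrap with fromAction (the blockingTaskX names are placeholders):

Completable a = Completable.fromAction(() -> blockingTaskA())
    .subscribeOn(Schedulers.io());   // each task gets its own worker thread
Completable b = Completable.fromAction(() -> blockingTaskB())
    .subscribeOn(Schedulers.io());
Completable c = Completable.fromAction(() -> blockingTaskC())
    .subscribeOn(Schedulers.io());

Completable.mergeArray(a, b, c).subscribe(
    () -> { /* all succeeded */ },
    e -> { /* at least one failed */ }
);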
Hello, I would like to know how to call two or more web services or REST services in parallel and compose a response from the calls.
I have found some examples on the web using other technologies (below), but I cannot get it to work with Reactor:
// start task A asynchronously
CompletableFuture<ResponseA> futureA = asyncServiceA.someMethod(someParam);
// start task B asynchronously
CompletableFuture<ResponseB> futureB = asyncServiceB.someMethod(someParam);
CompletableFuture<String> combinedFuture = futureA
.thenCombine(futureB, (a, b) -> a.toString() + b.toString());
// wait till both A and B complete
String finalValue = combinedFuture.join();
////////////////////////////////////////////////////////////////////////////////
static void Run()
{
//Follow steps at this link for adding a reference to the necessary .NET library:
//http://stackoverflow.com/questions/9611316/system-net-http-missing-from-namespace-using-net-4-5
//Create an HTTP Client
var client = new HttpClient();
//Call first service
var task1 = client.GetAsync("http://www.cnn.com");
//Call second service
var task2 = client.GetAsync("http://www.google.com");
//Create list of all returned async tasks
var allTasks = new List<Task<HttpResponseMessage>> { task1, task2 };
//Wait for all calls to return before proceeding
Task.WaitAll(allTasks.ToArray());
}
Let's imagine you need to hit 2 services, so you need 2 base WebClient instances (each configured with the correct base URL and e.g. an authentication scheme):
@Bean
public WebClient serviceAClient(String authToken) {
return WebClient.builder()
.baseUrl("http://serviceA.com/api/v2/")
.defaultHeader(HttpHeaders.AUTHORIZATION, "Basic " + authToken)
.defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
.build();
}
@Bean
public WebClient serviceBClient(String authToken) {
return WebClient.builder()
.baseUrl("https://api.serviceB.com/")
.defaultHeader(HttpHeaders.AUTHORIZATION, "token " + authToken)
.defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
.build();
}
From there, let's assume you get these 2 WebClients injected into your controller (as qualified beans). Here's the code to make a joint call to both using Reactor:
Mono<ResponseA> respA = webclientA.get()
.uri("/sub/path/" + foo)
.retrieve()
.bodyToMono(ResponseA.class);
Mono<ResponseB> respB = webclientB.get()
.uri("/path/for/b")
.retrieve()
.bodyToMono(ResponseB.class);
Mono<String> join = respA.zipWith(respB, (a, b) -> a.toString() + b.toString());
return join;
Note that the zip function could produce something more meaningful, like a business object built from the 2 responses. The resulting Mono<String> only triggers the 2 requests if something subscribes to it (in the case of Spring WebFlux, the framework does that when you return it from a controller method).
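For instance, a sketch of zipping into a (hypothetical) business object instead of a concatenated String:

// CombinedResponse is a made-up wrapper type; substitute your own DTO.
class CombinedResponse {
    final ResponseA a;
    final ResponseB b;
    CombinedResponse(ResponseA a, ResponseB b) { this.a = a; this.b = b; }
}

// The combinator receives both results once both Monos have emitted.
Mono<CombinedResponse> join = respA.zipWith(respB, CombinedResponse::new);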
If you're using Spring Reactor, what you need is the zip operator, to run your processes and zip them once they are finished.
/**
 * The zip operator executes the N Flux independently and, once all of them
 * have finished, combines the results into a TupleN object.
 */
@Test
public void zip() {
Flux<String> flux1 = Flux.just("hello ");
Flux<String> flux2 = Flux.just("reactive");
Flux<String> flux3 = Flux.just(" world");
Flux.zip(flux1, flux2, flux3)
.map(tuple3 -> tuple3.getT1().concat(tuple3.getT2()).concat(tuple3.getT3()))
.map(String::toUpperCase)
.subscribe(value -> System.out.println("zip result:" + value));
}
You can see more about reactive technology here: https://github.com/politrons/reactive
If you already have a synchronous implementation, you can easily add some Reactor features to make it run in parallel, thanks to the Mono.fromCallable() method.
Mono<ResponseA> responseA = Mono
.fromCallable(() -> blockingserviceA.getResponseA())
.subscribeOn(Schedulers.elastic()); // will execute on a separate thread when called
Mono<ResponseB> responseB = Mono
.fromCallable(() -> blockingserviceB.getResponseB())
.subscribeOn(Schedulers.elastic());
// At that point nothing has been called yet, responseA and responseB are empty Mono
AggregatedStuff aggregatedStuff = Mono.zip(responseA, responseB) // zip as many Mono as you want
.flatMap(r -> doStuff(r.getT1(), r.getT2())) // do whatever needed with the results
.block(); // execute the async calls, and then execute flatMap transformation
The important difference between fromCallable() and just() is that just() evaluates its argument immediately, on the calling thread, whereas fromCallable() is lazy, meaning the callable only runs when needed, e.g. when you call block() (or collect() for a Flux), etc.
Mono<ResponseA> responseA = Mono
    .just(blockingserviceA.getResponseA()); // executes directly
Mono<ResponseB> responseB = Mono
    .just(blockingserviceB.getResponseB()); // executes directly
// The above code has already run sequentially, so you can access the response results
// => yes it is kind of useless, and yes it is exactly how I did it the first time!
So avoid using just() for heavy tasks that you want to run in parallel. Using just() for simple instantiation is completely fine, since you would not want to create a new thread (and pay the overhead that comes with it) every time you instantiate a String or any other object.
PS: As Simon Baslé pointed out, you can use WebClient to directly return Mono and Flux and make async calls, but if you already have your API clients implemented and don't have the option of refactoring the entire application, fromCallable() is a simple way to set up asynchronous processing without refactoring too much code.
I'm having a very specific problem or misunderstanding with RxJava that someone can hopefully help with.
I'm running RxJava 2.1.5 and have the following code snippet:
public static void main(String[] args) {
final Observable<Object> observable = Observable.create(emitter -> {
// Code ...
});
observable.subscribeOn(Schedulers.io())
.retryWhen(error -> {
System.out.println("retryWhen");
return error.retry();
}).subscribe(next -> System.out.println("subscribeNext"),
error -> System.out.println("subscribeError"));
}
After executing this, the program prints:
retryWhen
Process finished with exit code 0
My question, and what I don't understand, is: why is retryWhen called immediately upon subscribing to the Observable? The observable does nothing.
What I want is for retryWhen to be called when onError is called on the emitter. Am I misunderstanding how Rx works?
Thanks!
Adding new snippet:
public static void main(String[] args) throws InterruptedException {
final Observable<Object> observable = Observable.create(emitter -> {
emitter.onNext("next");
emitter.onComplete();
});
final CountDownLatch latch = new CountDownLatch(1);
observable.subscribeOn(Schedulers.io())
.doOnError(error -> System.out.println("doOnError: " + error.getMessage()))
.retryWhen(error -> {
System.out.println("retryWhen: " + error.toString());
return error.retry();
}).subscribe(next -> System.out.println("subscribeNext"),
error -> System.out.println("subscribeError"),
() -> latch.countDown());
latch.await();
}
The emitter's onNext and onComplete are called. doOnError is never called. The output is:
retryWhen: io.reactivex.subjects.SerializedSubject#35fb3008
subscribeNext
Process finished with exit code 0
retryWhen calls the provided function when an Observer subscribes to it, so you have a main sequence accompanied by a sequence that emits the Throwable the main sequence failed with. You should compose logic onto the Observable you get in this function so that, in the end, each Throwable results in a value on the other end.
Observable.error(new IOException())
.retryWhen(e -> {
System.out.println("Setting up retryWhen");
int[] count = { 0 };
return e
.takeWhile(v -> ++count[0] < 3)
.doOnNext(v -> { System.out.println("Retrying"); });
})
.subscribe(System.out::println, Throwable::printStackTrace);
Since the e -> { } function body is executed for each individual subscriber, you can safely keep per-subscriber state, such as a retry counter.
Using e -> e.retry() has no effect because the inner error flow never gets its onError called.
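A related composition you'll often see (just a sketch, not the only option): mapping each error to a timer, so the resubscription happens after a delay instead of immediately.

// After each error, wait one second before resubscribing; once the handler completes
// (after 3 errors here), the main sequence completes as well, as in the takeWhile example above.
Observable.error(new IOException())
    .retryWhen(errors -> errors
        .take(3)
        .flatMap(e -> Observable.timer(1, TimeUnit.SECONDS)))
    .blockingSubscribe(System.out::println, Throwable::printStackTrace);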
One issue is that you don't receive any more results: the work is moved onto a background thread by subscribeOn(Schedulers.io()), but your app finishes right after subscribing. To see the retry behaviour you may want to keep your app running.
That actually means that you need to add something like this to the end of your code:
while (true) {}
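If you'd rather not burn a core with an empty loop, a sketch of the same idea that simply blocks the caller until the sequence terminates (my variation, not required by the approach):

// blockingSubscribe keeps the calling thread waiting until the sequence terminates
observable
    .subscribeOn(Schedulers.io())
    .retryWhen(errors -> errors.retry())
    .blockingSubscribe(
        next -> System.out.println("subscribeNext"),
        error -> System.out.println("subscribeError"));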
Another issue is that you don't emit any error in your sample. You also need to emit at least one value with onNext(), otherwise it won't repeat, because it keeps waiting for one.
Here's a working example which emits a value, then emits an error, and repeats. You can use
.retryWhen(errors -> errors)
which is the same as
.retryWhen(errors -> errors.retry())
Working sample:
public static void main(String[] args) {
Observable
.create(e -> {
e.onNext("test");
e.onError(new Throwable("test"));
})
.retryWhen(errors -> errors.retry())
.subscribeOn(Schedulers.io())
.subscribe(
next -> System.out.println("subscribeNext"),
error -> System.out.println("subscribeError"),
() -> System.out.println("onCompleted")
);
while (true) {
}
}
The reason why you need to emit a result is that an Observable needs to emit a value, otherwise it waits until it receives one.
This is because onError can only be called once (per subscription), while onNext can emit 1..* values.
You can check this behaviour by using doOnError(), which gives you the error every time the Observable is retried.
Observable
.create(e -> e.onError(new Exception("empty")))
.doOnError(e -> System.out.println("error received " + e))
.retryWhen(errors -> errors.retry())
.subscribeOn(Schedulers.io())
.subscribe(
nextOrSuccess -> System.out.println("nextOrSuccess " + nextOrSuccess),
error -> System.out.println("subscribeError")
);
In the code below I need to release some resources on unsubscription (where it logs "release").
Observable first = Observable.create(new Observable.OnSubscribe<Object>() {
    @Override
    public void call(Subscriber<? super Object> subscriber) {
        subscriber.add(Subscriptions.create(() -> {
            log("release");
        }));
    }
}).doOnUnsubscribe(() -> log("first"));
Observable second = Observable.create(…).doOnUnsubscribe(() -> log("second"));
Observable result = first.mergeWith(second).doOnUnsubscribe(() -> log("result"));
Subscription subscription = result.subscribe(…);
//…
subscription.unsubscribe();
But it logs only "result". It looks like unsubscription is not propagated to merge's child observables. So how do I handle unsubscription inside the first observable's Observable.OnSubscribe?
Most of the time, calling unsubscribe only has an effect on a live sequence and may not propagate if certain sequences have already completed: the operators may not keep their sources around, so they can avoid memory leaks. The main idea is that operators release any resources they manage on termination, just before or just after they call their downstream's onError or onCompleted methods, but this is somewhat inconsistent in 1.x.
If you want to make sure resources are released, look at the using operator, which will release your resource upon termination or unsubscription:
Observable.using(
() -> "resource",
r -> Observable.just(r),
r -> System.out.println("Releasing " + r))
.subscribe(System.out::println);
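Applied to your snippet, a rough sketch (acquireResource() and the emission logic are placeholders for whatever you currently set up inside OnSubscribe):

Observable<Object> first = Observable.using(
    () -> acquireResource(),                          // acquire, once per subscriber
    resource -> Observable.<Object>create(subscriber -> {
        // ... emit items using the resource
    }),
    resource -> log("release"));                      // runs on termination or unsubscription

Observable<Object> result = first.mergeWith(second).doOnUnsubscribe(() -> log("result"));
Subscription subscription = result.subscribe();
subscription.unsubscribe();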