Java Reactor onComplete inconsistency

I'm sure that I'm just missing something. I'm running the following code:
@Test
public void simpleCreation() throws Exception {
    Iterator<String> data = ImmutableList.of("1", "2", "3").iterator();
    Flux<String> stringFlux = Flux.create(emitter -> {
        while (data.hasNext()) {
            emitter.next(data.next());
        }
        emitter.complete();
    });

    ConnectableFlux<String> connectableFlux = stringFlux.publish();
    connectableFlux.doOnComplete(() -> System.out.println("connectableFlux.doOnComplete"));
    stringFlux.doOnComplete(() -> System.out.println("stringFlux.doOnComplete"));

    CountDownLatch completeLatch = new CountDownLatch(1);
    Disposable disposable = connectableFlux.subscribe(s -> {
        System.out.println("subscribe: data: " + s);
    }, error -> { }, completeLatch::countDown);

    connectableFlux.connect();
    completeLatch.await();
    disposable.dispose();
}
and expect it to print either "connectableFlux.doOnComplete" or "stringFlux.doOnComplete" or both, but I see neither. The onComplete callback passed to the subscribe method is executed with no problem, but neither of the doOnComplete callbacks is called, and I do not quite see why.
To me it looks slightly inconsistent - in one place the callback is called and in the others it is simply ignored. I can observe similar behaviour with doOnNext.
I would appreciate it if someone could explain the concept behind this. I'm sure it is not a bug, just something I'm missing about the framework or the concept in general.

This line is causing the problem:
connectableFlux.doOnComplete(() -> System.out.println("connectableFlux.doOnComplete"));
The result of the call to doOnComplete() is ignored. The method returns a new, decorated version of the Flux on which you would have to call subscribe(); it does not add the logic to the old connectableFlux instance.
Try it like this:
Iterator<String> data = ImmutableList.of("1", "2", "3").iterator();
Flux<String> stringFlux = Flux.create(emitter -> {
    while (data.hasNext()) {
        emitter.next(data.next());
    }
    emitter.complete();
});

stringFlux.doOnComplete(() -> System.out.println("stringFlux.doOnComplete()"))
    .subscribe(s -> System.out.println("subscribe: data: " + s), error -> {})
    .dispose();

stringFlux.publish().connect();
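If you want to keep the ConnectableFlux from the original test, the same rule applies: subscribe and connect to the decorated instances, not the originals. A minimal sketch under that assumption, reusing stringFlux and the latch from the question:

// Chain doOnComplete before publish() so the decorated Flux is what gets published
ConnectableFlux<String> connectableFlux = stringFlux
    .doOnComplete(() -> System.out.println("stringFlux.doOnComplete"))
    .publish();

CountDownLatch completeLatch = new CountDownLatch(1);
// Subscribe to the Flux returned by doOnComplete, not to connectableFlux itself
Disposable disposable = connectableFlux
    .doOnComplete(() -> System.out.println("connectableFlux.doOnComplete"))
    .subscribe(s -> System.out.println("subscribe: data: " + s),
        error -> { },
        completeLatch::countDown);

connectableFlux.connect();
completeLatch.await();
disposable.dispose();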

I can't add this as a comment, so sorry about bumping an old question. Just wanted to share this bit of the official Reactor guide:
B.2. I used an operator on my Flux but it doesn’t seem to apply. What gives?
Make sure that the variable you .subscribe() to has been affected by the operators you think should have been applied to it.
Reactor operators are decorators. They return a different instance that wraps the source sequence and add behavior. That is why the preferred way of using operators is to chain the calls.
Compare the following two examples:
without chaining (incorrect)
Flux<String> flux = Flux.just("foo", "chain");
flux.map(secret -> secret.replaceAll(".", "*"));
flux.subscribe(next -> System.out.println("Received: " + next));
The mistake is here. The result isn’t attached to the flux variable.
without chaining (correct)
Flux<String> flux = Flux.just("foo", "chain");
flux = flux.map(secret -> secret.replaceAll(".", "*"));
flux.subscribe(next -> System.out.println("Received: " + next));
This sample is even better (because it’s simpler):
with chaining (best)
Flux
    .just("foo", "chain")
    .map(secret -> secret.replaceAll(".", "*"))
    .subscribe(next -> System.out.println("Received: " + next));
The first version will output:
Received: foo
Received: chain
Whereas the two other versions will output the expected:
Received: ***
Received: *****
https://projectreactor.io/docs/core/release/reference/#faq.chain

Mutiny - How to group items to send requests in blocks

I'm using the Mutiny extension (for Quarkus) and I don't know how to manage this problem.
I want to send many requests asynchronously, so I've read about the Mutiny extension. But the server closes the connection because it receives thousands of them.
So I need to:
Send the requests in blocks
After all requests are sent, do things.
I've been using a Uni object to combine all the responses, like this:
Uni<Map<Integer, String>> uniAll = Uni.combine()
    .all()
    .unis(list)
    .combinedWith(...);
And then:
uniAll.subscribe()
    .with(...);
This code sends all the requests in parallel, so the server closes the connection.
I'm trying to use groups of Multi objects, but I don't know how to use them (I can't find any example in the Mutiny docs).
This is the way I'm doing it now:
// Launch 1000 requests
for (int i = 0; i < 1000; i++) {
    multi = client.getAbs("https://api.*********.io/jokes/random")
        .as(BodyCodec.jsonObject())
        .send()
        .onItem().transformToMulti(
            array -> Multi.createFrom()
                .item(array.body().getString("value")))
        .group()
        .intoLists()
        .of(100)
        .subscribe()
        .with(a -> {
            System.out.println("Value: " + a);
        });
}
I thought the subscription wouldn't execute until there were groups of 100 items, but I guess this is not the way, because it doesn't work.
Does anybody know how to launch 1000 async requests in blocks of 100?
Thanks in advance.
UPDATED 2021-04-19
I've tried this approach:
List<Uni<String>> listOfUnis = new ArrayList<>();
for (int i = 0; i < 1000; i++) {
    listOfUnis.add(client
        .getAbs("https://api.*******.io/jokes/random")
        .as(BodyCodec.jsonObject())
        .send()
        .onItem()
        .transform(item -> item
            .body()
            .getString("value")));
}

Multi<Uni<String>> multiFormUnis = Multi.createFrom()
    .iterable(listOfUnis);
List<String> listOfResponses = new ArrayList<>();
List<String> listOfValues = multiFormUnis.group()
    .intoLists()
    .of(100)
    .onItem()
    .transformToMultiAndConcatenate(listOfOneHundred -> {
        System.out.println("Size: " + listOfOneHundred.size());
        for (int index = 0; index < listOfOneHundred.size(); index++) {
            listOfResponses.add(listOfOneHundred.get(index)
                .await()
                .indefinitely());
        }
        return Multi.createFrom()
            .iterable(listOfResponses);
    })
    .collectItems()
    .asList()
    .await()
    .indefinitely();

for (String value : listOfValues) {
    System.out.println(value);
}
When I put in this line:
listOfResponses.add(listOfOneHundred.get(index)
    .await()
    .indefinitely());
the responses are printed one after another, and when the first group of 100 items ends, it prints the next group. The problem? The requests are sequential, and that takes far too long.
I think I am close to the solution, but I need to know how to send the parallel requests only in groups of 100, because if I use:
subscribe().with()
all the requests are sent in parallel (and not in groups of 100).
I think you are creating the Multi wrong; it would be much easier to use this:
Multi<String> multiOfJokes = Multi.createFrom().emitter(multiEmitter -> {
    for (int i = 0; i < 1000; i++) {
        multiEmitter.emit(i);
    }
    multiEmitter.complete();
}).onItem().transformToUniAndMerge(index -> {
    return Uni.createFrom().item("String" + index);
});
With this approach it should make the calls parallel.
Now there is the question of how to turn it into a list.
The grouping works fine.
I ran it with this code:
Random random = new Random();
Multi<Integer> multiOfInteger = Multi.createFrom().emitter(multiEmitter -> {
    for (Integer i = 0; i < 1000; i++) {
        multiEmitter.emit(i);
    }
    multiEmitter.complete();
});

Multi<String> multiOfJokes = multiOfInteger.onItem().transformToUniAndMerge(index -> {
    if (index % 10 == 0) {
        Duration delay = Duration.ofMillis(random.nextInt(100) + 1);
        return Uni.createFrom().item("String " + index + " delayed").onItem()
            .delayIt().by(delay);
    }
    return Uni.createFrom().item("String" + index);
}).onCompletion().invoke(() -> System.out.println("Completed"));

Multi<List<String>> multiListJokes = multiOfJokes
    .group().intoLists().of(100)
    .onCompletion().invoke(() -> System.out.println("Completed"))
    .onItem().invoke(strings -> System.out.println(strings));

multiListJokes.collect().asList().await().indefinitely();
You will get a list of your strings.
I don't know how you intend to send the list to the backend, but you can do it either with:
call() (executed asynchronously)
your own subscriber (implement Subscriber; the methods are straightforward)
as you need for your bulk request (see the sketch below).
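For example, a minimal sketch of the call() variant, where sendBatch() is a hypothetical method returning a Uni<Void> that performs the bulk request for one block of 100:

multiOfJokes
    .group().intoLists().of(100)
    .onItem().call(batch -> sendBatch(batch)) // sendBatch() is hypothetical; a batch is only propagated once its Uni completes
    .collect().asList()
    .await().indefinitely();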
I hope you understand it better afterward.
PS: a link to the guide where I learned all of this:
https://smallrye.io/smallrye-mutiny/guides
So in short you want to batch parallel calls to the server, without hitting it with everything at once.
Could this work for you? It uses merge. In my example, it has a parallelism of 2.
Multi.createFrom().range(1, 10)
    .onItem()
    .transformToUni(integer -> {
        return <<my long operation Uni>>
    })
    .merge(2) // this is the concurrency
    .collect()
    .asList();
I'm not sure if merge was added later this year, but this seems to do what you want. In my example, the "long operation producing a Uni" is actually a call to the MicroProfile Rest Client, which produces a Uni and returns a string. After the merge you can add another onItem to do something with the response (it's a plain Multi after the merge), instead of collecting everything as a list.
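Applied to the question's numbers, a sketch might look like this; fetchJoke() is a hypothetical stand-in for the Uni-returning client call:

Multi.createFrom().range(0, 1000)
    .onItem()
    .transformToUni(i -> fetchJoke(i)) // fetchJoke() is assumed to return Uni<String>
    .merge(100) // at most 100 requests in flight at any time
    .onItem().invoke(joke -> System.out.println(joke))
    .collect()
    .asList()
    .await().indefinitely();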

Spring Reactor onErrorContinue not working

As per the documentation, I expected onErrorContinue to ignore the erroneous element and continue the sequence. The test case below fails with this exception:
java.lang.AssertionError: expectation "expectNext(12)" failed (expected: onNext(12); actual: onError(java.lang.RuntimeException:
@Test
public void testOnErrorContinue() throws InterruptedException {
    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5)
        .concatWith(Flux.error(new RuntimeException("Test")))
        .concatWith(Flux.just(6))
        .map(i -> i * 2)
        .onErrorContinue((e, i) -> {
            System.out.println("Error For Item +" + i);
        });

    StepVerifier
        .create(fluxFromJust)
        .expectNext(2, 4, 6, 8, 10)
        .expectNext(12)
        .verifyComplete();
}
onErrorContinue() may not be doing what you think it does - it lets upstream operators recover from errors that occur within them, if they happen to support doing so. It's a rather specialist operator.
In this case map() does actually support onErrorContinue, but map isn't producing the error - the error has been inserted into the stream already (by way of concatWith() and the explicit Flux.error() call). In other words, no operator is producing the error at all, so there is nothing for onErrorContinue to recover from: there is no erroneous source element it could drop in order to carry on.
If you changed your stream so that map() actually caused the error, then it would work as expected:
Flux.just(1, 2, 3, 4, 5)
    .map(x -> {
        if (x == 5) {
            throw new RuntimeException();
        }
        return x * 2;
    })
    .onErrorContinue((e, i) -> {
        System.out.println("Error For Item +" + i);
    })
    .subscribe(System.out::println);
Produces:
2
4
6
8
Error For Item +5
An alternative, depending on the real-world use case, may be to use onErrorResume() after the element (or element source) that may be erroneous:
Flux.just(1, 2, 3, 4, 5)
    .concatWith(Flux.error(new RuntimeException()))
    .onErrorResume(e -> {
        System.out.println("Error " + e + ", ignoring");
        return Mono.empty();
    })
    .concatWith(Flux.just(6))
    .map(i -> i * 2)
    .subscribe(System.out::println);
In general, using another "onError" operator (such as onErrorResume()) is the more usual and more recommended approach, since onErrorContinue() depends on operator support and affects upstream, not downstream, operators (which is unusual).
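For instance, a sketch of the original test rewritten with onErrorResume(); under these assumptions the StepVerifier expectations should pass:

Flux<Integer> flux = Flux.just(1, 2, 3, 4, 5)
    .concatWith(Flux.error(new RuntimeException("Test")))
    .onErrorResume(e -> Mono.empty()) // swallow the injected error, then continue the concatenation
    .concatWith(Flux.just(6))
    .map(i -> i * 2);

StepVerifier.create(flux)
    .expectNext(2, 4, 6, 8, 10, 12)
    .verifyComplete();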

What's the point of .switchIfEmpty() getting evaluated eagerly?

Even if my stream is not empty, the fallback stream would always be created? What's the intent behind doing this? This is extremely non-idiomatic.
On the other hand, .onErrorResume is evaluated lazily.
Could someone please explain to me why .switchIfEmpty is evaluated eagerly?
Here's the code:
public static void main(String[] args) {
    Mono<Integer> m = Mono.just(1);
    m.flatMap(a -> Mono.delay(Duration.ofMillis(5000)).flatMap(p -> Mono.empty()))
        .switchIfEmpty(getFallback())
        .doOnNext(a -> System.out.println(a))
        .block();
}

private static Mono<Integer> getFallback() {
    System.out.println("In Here");
    return Mono.just(5);
}
The output is:
In Here (printed immediately)
5 (after 5s)
What you need to understand here is the difference between assembly time and subscription time.
Assembly time is when you create your pipeline by building the operator chain. At this point your publisher is not subscribed yet and you need to think kind of imperatively.
Subscription time is when you trigger the execution by subscribing and the data starts to flow through your pipeline. This is when you need to think reactively, in terms of callbacks, lambdas, lazy execution, etc.
More on this in the great article by Simon Baslé.
As @akarnokd mentioned in his answer, the getFallback() method is called imperatively at assembly time since it is not defined as a lambda, just a regular method call.
You can achieve true laziness with one of the methods below:
1. You can use Mono.fromCallable and put your log statement inside the lambda:
public static void main(String[] args) {
    Mono<Integer> m = Mono.just(1);
    m.flatMap(a -> Mono.delay(Duration.ofMillis(5000)).flatMap(p -> Mono.empty()))
        .switchIfEmpty(getFallback())
        .doOnNext(a -> System.out.println(a))
        .block();
}

private static Mono<Integer> getFallback() {
    System.out.println("Assembly time, here we are just in the process of creating the mono but not triggering it. This is always called regardless of the emptiness of the parent Mono.");
    return Mono.fromCallable(() -> {
        System.out.println("Subscription time, this is the moment when the publisher gets subscribed. It only gets called when the Mono was empty and the fallback is needed.");
        return 5;
    });
}
2. You can use Mono.defer and delay the execution and the assembling of your inner Mono until subscription:
public static void main(String[] args) {
    Mono<Integer> m = Mono.just(1);
    m.flatMap(a -> Mono.delay(Duration.ofMillis(5000)).flatMap(p -> Mono.empty()))
        .switchIfEmpty(Mono.defer(() -> getFallback()))
        .doOnNext(a -> System.out.println(a))
        .block();
}

private static Mono<Integer> getFallback() {
    System.out.println("Since we are using Mono.defer in the above pipeline, this message gets logged at subscription time.");
    return Mono.just(5);
}
Note that your original solution is also perfectly fine. You just need to be aware that the code before returning the Mono is executed at assembly time.
If you put parentheses around it, why would it execute anywhere else? This type of misunderstanding comes up quite often, and I'm not sure what the source is.
What happens should become more apparent when your code is rewritten:
Mono<Integer> m = Mono.just(1);

Mono<Integer> m2 = m.flatMap(a -> Mono.delay(Duration.ofMillis(5000))
    .flatMap(p -> Mono.empty()));

Mono<Integer> theFallback = getFallback(); // <------------------ still on the main thread!

m2.switchIfEmpty(theFallback)
    .doOnNext(a -> System.out.println(a))
    .block();
getFallback runs because its parent method is executing right there. This has nothing to do with Reactive Programming but is a fundamental property of most programming languages.
This strongly reminds me of java.util.Optional. For example:
String input = "not null"; // change to null
String result = Optional.ofNullable(input)
    .orElse(fallback());
System.out.println(result);

private static String fallback() {
    System.out.println("inside fallback");
    return "fallback";
}
No matter the value of input (null or not), it still evaluates the fallback method. Unlike Mono, though, Optional offers orElseGet, which is evaluated lazily via a java.util.function.Supplier. Doing .switchIfEmpty(Mono.defer(() -> getFallback())) is weird, at best, imo.
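For comparison, a minimal sketch of the lazy variant; here fallback() only runs when input is actually null:

String result = Optional.ofNullable(input)
    .orElseGet(() -> fallback()); // takes a Supplier, so fallback() is deferred, much like Mono.defer
System.out.println(result);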

RxJava sorted output from parallel computation

I have a list of tasks I want to perform in parallel, but I want to display the results of the tasks in the same order as the original list.
In other words, if I have task list [A,B,C], I do not wish to show B-result before I have shown A-result, nor do I want to wait until A-task is finished before starting B-task.
Additionally, I want to show each result as soon as possible. In other words, if the tasks finish in the order B, then A, then C: I do not want to show anything when I receive B-result; when I receive A-result, I show A-result immediately followed by B-result; then I show C-result whenever I receive it.
This is of course not terribly tricky to do by making an Observable for each task, combining them with merge, subscribing on a computation thread pool, and then writing a Subscriber which holds a buffer for any results received out of order. However, the Rx rule of thumb tends to be "there's already an operator for that", so the question is: what is the proper RxJava way to solve this, if indeed there is such a thing?
It seems you need concatEager for this task, but it is somewhat possible to achieve it with pre-1.0.15 tools and no need for "creating" Observables. Here is an example:
Observable<Long> source1 = Observable.interval(100, 100, TimeUnit.MILLISECONDS).take(10);
Observable<Long> source2 = Observable.interval(100, 100, TimeUnit.MILLISECONDS).take(20);
Observable<Long> source3 = Observable.interval(100, 100, TimeUnit.MILLISECONDS).take(15);

Observable<Observable<Long>> sources = Observable.just(source1, source2, source3);

sources.map(v -> {
    Observable<Long> c = v.cache();
    c.subscribe(); // to cache all
    return c;
})
.onBackpressureBuffer() // make sure all sources are started
.concatMap(v -> v)
.toBlocking()
.forEach(System.out::println);
The drawback is that it retains all values for the whole duration of the sequence. This can be fixed with a special kind of Subject, UnicastSubject, but RxJava 1.x doesn't have one and may not get one "officially". You can, however, look at one of my blog posts, build it for yourself, and have the following code:
//...
sources.map(v -> {
    UnicastSubject<Long> subject = UnicastSubject.create();
    v.subscribe(subject);
    return subject;
})
//...
"There's not quite an operator for that". Although, in the 1.0.15-SNAPSHOT build there is an experimental concatEagar() operator those sounds like it does what you're looking for. Pull request for concatEager
repositories {
    maven { url 'https://oss.jfrog.org/libs-snapshot' }
}

dependencies {
    compile 'io.reactivex:rxjava:1.0.15-SNAPSHOT'
}
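Usage would then be a one-liner; a sketch, assuming the snapshot operator matches the pull request (it subscribes to all tasks eagerly but emits their results in task order), with taskA, taskB and taskC being task Observables like those in the helper below:

// eager subscription, ordered emission - results come out in argument order
Observable<Result> ordered = Observable.concatEager(taskA, taskB, taskC);
ordered.subscribe(result -> System.out.println("Received: " + result));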
If you want to roll your own temporary solution until concatEager() gets the nod of approval, you could try something like this:
public Observable<Result> concatEager(final Observable<Result> taskA, final Observable<Result> taskB, final Observable<Result> taskC) {
    return Observable
        .create(subscriber -> {
            final Observable<Result> taskACached = taskA.cache();
            final Observable<Result> taskBCached = taskB.cache();
            final Observable<Result> taskCCached = taskC.cache();

            // Kick off all the tasks simultaneously.
            subscriber.add(
                Observable
                    .merge(taskACached, taskBCached, taskCCached)
                    .subscribe(
                        result -> { // Throw away result
                        },
                        throwable -> { // Ignore errors
                        }
                    )
            );

            // Put the results in order.
            subscriber.add(
                Observable
                    .concat(taskACached, taskBCached, taskCCached)
                    .subscribe(subscriber)
            );
        });
}
Note that the above code is totally untested. There are probably better ways to do this but this is what first came to mind...

How can I return an RxJava Observable which is guaranteed not to throw OnErrorNotImplementedException?

I want to create a pattern in my application where all Observable<T> objects that are returned have some default error handling, meaning that subscribers may use the .subscribe(onNext) overload without fear of the application crashing. (Normally you'd have to use .subscribe(onNext, onError).) Is there any way to achieve this?
I've tried attaching to the Observable using onErrorReturn, doOnError and onErrorResumeNext - without any of them helping my case. Maybe I'm doing it wrong, but I still get rx.exceptions.OnErrorNotImplementedException if an error occurs within the Observable.
Edit 1: This is an example of an Observable that emits an error, which I want to handle in some middle layer:
Observable.create(subscriber -> {
    subscriber.onError(new RuntimeException("Somebody set up us the bomb"));
});
Edit 2: I've tried this code to handle the error on behalf of the consumer, but I still get OnErrorNotImplementedException:
// obs is set by the method illustrated in edit 1
obs = obs.onErrorResumeNext(throwable -> {
    System.out.println("This error is handled by onErrorResumeNext");
    return null;
});
obs = obs.doOnError(throwable -> System.out.println("A second attempt at handling it"));

// Consumer code:
obs.subscribe(
    s -> System.out.println("got: " + s)
);
This will work - the key was to return Observable.empty():
private <T> Observable<T> attachErrorHandler(Observable<T> obs) {
    return obs.onErrorResumeNext(throwable -> {
        System.out.println("Handling error by printing to console: " + throwable);
        return Observable.empty();
    });
}

// Use like this:
Observable<String> unsafeObs = getErrorProducingObservable();
Observable<String> safeObservable = attachErrorHandler(unsafeObs);

// This call will now never cause OnErrorNotImplementedException
safeObservable.subscribe(s -> System.out.println("Result: " + s));
