Testing internal Flux.interval - java

I want to provide some data using Reactor's Flux. Since providing this data may take a long time, I decided to introduce a ping mechanism (e.g. to keep the TCP connection alive and avoid timeouts). Here is my simplified solution:
public class Example {

    private final DataProvider dataProvider;

    public Example(DataProvider dataProvider) {
        this.dataProvider = dataProvider;
    }

    public Flux<String> getData() {
        AtomicBoolean inProgress = new AtomicBoolean(true);
        Flux<String> dataFlux = dataProvider.provide()
                .doFinally(ignoreIt -> inProgress.set(false));
        return dataFlux.mergeWith(ping(inProgress::get));
    }

    private Publisher<String> ping(Supplier<Boolean> inProgress) {
        return Flux.interval(Duration.ofSeconds(1), Duration.ofSeconds(1))
                .map(tick -> "ping " + tick)
                .takeWhile(ignoreIt -> inProgress.get());
    }

    interface DataProvider {
        Flux<String> provide();
    }

    public static void main(String[] args) {
        Callable<String> dataProviderLogic = () -> {
            Thread.sleep(3500);
            return "REAL DATA - SHOULD TERMINATE PING";
        };
        // wrapping synchronous call
        DataProvider dataProvider = () -> Mono.fromCallable(dataProviderLogic)
                .subscribeOn(Schedulers.boundedElastic())
                .flux();
        new Example(dataProvider).getData()
                .doOnNext(data -> System.out.println("GOT: " + data))
                .blockLast();
    }
}
The above code prints to the console:
GOT: ping 0
GOT: ping 1
GOT: ping 2
GOT: REAL DATA - SHOULD TERMINATE PING
So it works as expected.
The question is: how can I test this ping mechanism in a Junit5 test, so it won't take a lot of time (e.g. several seconds)?
In an ideal world I would like to write a test which imitates a delay in the data provision, checks whether the expected number of pings was generated, and verifies that the complete signal was emitted (to make sure that the ping flux terminates as expected). Of course I would like a unit test that runs in milliseconds.
I tried this, but with no luck:
@Test
void test() {
    TestPublisher<String> publisher = TestPublisher.create();
    Flux<String> data = new Example(publisher::flux).getData();
    StepVerifier.withVirtualTime(() -> data)
            .thenAwait(Duration.ofMillis(3500))
            .then(() -> publisher.emit("REAL DATA - SHOULD TERMINATE PING"))
            .then(publisher::complete)
            .expectNextCount(4)
            .verifyComplete();
}
The above test ends up with this error:
java.lang.AssertionError: expectation "expectNextCount(4)" failed (expected: count = 4; actual: counted = 1; signal: onComplete())
Is it possible at all to use virtual time for internally created Flux.interval?
Any ideas for an alternative ping solution will be appreciated.

Even though the ping mechanism above is not the best one (I would suggest using a Sink instead of an AtomicBoolean, and takeUntilOther instead of takeWhile), in my case the problem was most likely that not all of the flux operators were created inside the withVirtualTime supplier. This code works as expected for the case above:
@Test
void test() {
    StepVerifier.withVirtualTime(() -> {
        Flux<String> data = Flux.just("REAL DATA - SHOULD TERMINATE PING")
                .delayElements(Duration.ofMillis(3200));
        return new Example(() -> data).getData();
    })
            .thenAwait(Duration.ofMillis(3500))
            .expectNextCount(4)
            .thenAwait(Duration.ofMillis(1000))
            .verifyComplete();
}
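For completeness, here is a rough sketch of the Sink-based variant hinted at above (my own illustration, assuming Reactor's Sinks API is available): completion of the data flux is signalled through a Sinks.Empty, and the ping flux is stopped with takeUntilOther instead of polling an AtomicBoolean.
public Flux<String> getData() {
    // Sketch only: signal completion via a Sink instead of an AtomicBoolean.
    Sinks.Empty<Void> done = Sinks.empty();
    Flux<String> dataFlux = dataProvider.provide()
            .doFinally(ignoreIt -> done.tryEmitEmpty());
    Flux<String> pings = Flux.interval(Duration.ofSeconds(1))
            .map(tick -> "ping " + tick)
            // completes the ping flux as soon as the data flux terminates
            .takeUntilOther(done.asMono());
    return dataFlux.mergeWith(pings);
}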

Related

Leverage PriorityBlockingQueue to build a producer-consumer pattern in Java Reactor

In my project, a Spring scheduler periodically scans "TO BE DONE" tasks from the DB and then distributes them to a task consumer for subsequent handling. The current implementation constructs a Reactor Sink between the producer and the consumer.
Sinks.Many<Task> taskSink = Sinks.many().multicast().onBackpressureBuffer(1000, false);
Producer:
Flux<Date> dates = loadDates();
dates.filterWhen(...)
.concatMap(date -> taskManager.getTaskByDate(date))
.doOnNext(taskSink::tryEmitNext)
.subscribe();
Consumer:
taskProcessor.process(taskSink.asFlux())
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
By using a Sink, it works fine for most cases. But when the system is under heavy load, the system maintainers want to know:
How many tasks still sitting in the Sink?
If it is possible to clear all tasks within the Sink.
If it is possible to prioritize tasks within the Sink.
Unfortunately, a Sink alone cannot fulfill all of the needs mentioned above.
So, I created a wrapper class that includes a Map and a PriorityBlockingQueue. I referenced the implementation from this link: https://stackoverflow.com/a/71009712/19278017.
After that, the original producer-consumer code revised as below:
Task queue:
MergingQueue<Task> taskQueue = new PriorityMergingQueue();
Producer:
Flux<Date> dates = loadDates();
dates.filterWhen(...)
.concatMap(date -> taskManager.getTaskByDate(date))
.doOnNext(taskQueue::enqueue)
.subscribe();
Consumer:
taskProcessor.process(Flux.<Task>create(sink -> {
    sink.onRequest(n -> {
        try {
            Task task;
            long remaining = n;
            while (!sink.isCancelled() && remaining > 0) {
                if ((task = taskQueue.poll(1, TimeUnit.SECONDS)) != null) {
                    sink.next(task);
                    remaining--;
                }
            }
        } catch (InterruptedException e) {
            // ....
        }
    });
}))
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
I got some questions as below:
Will the .poll() call be an issue? I came across a thread-hang issue during longevity testing and am not sure whether it is caused by the poll() call.
Is there any alternative solution in Reactor, which works like a PriorityBlockingQueue?
The goal of reactive programming is to avoid blocking operations. PriorityBlockingQueue.poll() will cause issues as it will block the thread waiting for the next element.
There is, however, an alternative solution in Reactor: the unicast version of Sinks.Many allows using an arbitrary Queue for buffering via Sinks.many().unicast().onBackpressureBuffer(Queue<T>). By using a PriorityQueue instantiated outside of the Sink, you can fulfill all three requirements.
Here is a short demo where I emit a Task every 100ms:
public record Task(int prio) {}
private static void log(Object message) {
System.out.println(LocalTime.now(ZoneOffset.UTC).truncatedTo(ChronoUnit.MILLIS) + ": " + message);
}
public void externalBufferDemo() throws InterruptedException {
Queue<Task> taskQueue = new PriorityQueue<>(Comparator.comparingInt(Task::prio).reversed());
Sinks.Many<Task> taskSink = Sinks.many().unicast().onBackpressureBuffer(taskQueue);
taskSink.asFlux()
.delayElements(Duration.ofMillis(100))
.subscribe(task -> log(task));
for (int i = 0; i < 10; i++) {
taskSink.tryEmitNext(new Task(i));
}
// Show amount of tasks sitting in the Sink:
log("Nr of tasks in sink: " + taskQueue.size());
// Clear all tasks in the sink after 350ms:
Thread.sleep(350);
taskQueue.clear();
log("Nr of tasks after clear: " + taskQueue.size());
Thread.sleep(1500);
}
Output:
09:41:11.347: Nr of tasks in sink: 9
09:41:11.450: Task[prio=0]
09:41:11.577: Task[prio=9]
09:41:11.687: Task[prio=8]
09:41:11.705: Nr of tasks after clear: 0
09:41:11.799: Task[prio=7]
Note that delayElements has an internal queue of size 1, which is why Task 0 was picked up before Task 1 was emitted, and why Task 7 was picked up after the clear.
If multicast is required, you can transform your flux using one of the many operators enabling multicasting.
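As a hedged illustration of that last point (my own sketch, not part of the answer above), share() is one such operator: it fans the same upstream out to several subscribers while the PriorityQueue is still drained only once.
// Sketch: multicast the unicast Sink's Flux to two consumers.
Flux<Task> multicastTasks = taskSink.asFlux().share();
multicastTasks.subscribe(task -> log("consumer A got " + task));
multicastTasks.subscribe(task -> log("consumer B got " + task));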

How to measure execution time of webflux WebClient methods?

I prepare a bunch of requests that I want to send in parallel to an external webservice.
In this flow, I continue to process the response directly (eg inserting something into a database).
Problem: I want to track the maximum request time (for one request!), excluding the processing.
But as written, this will only track the global time, including any subprocessing:
StopWatch watch = new StopWatch();
watch.start();
Flux.fromIterable(requests)
        .flatMap(req -> webClient.send(req, MyResponse.class)
                .doOnSuccess(rsp -> processResponse(rsp))) // assume some longer routine
        .collectList()
        .block();
watch.stop();
System.out.println(watch.getTotalTimeMillis());
Question: how can I measure the maximum time the requests took, excluding the processResponse() time?
When using elapsed() on a Mono, you get back a Mono of a Tuple2 containing both the elapsed time and the original object; you have to unwrap it to use the values. I wrote an example (a bit simplified from your code) in a test to see it working:
@Test
public void elapsed() {
    Flux.fromIterable(List.of(1, 2, 3, 4, 5))
            .flatMap(req -> Mono.delay(Duration.ofMillis(100L * req))
                    .map(it -> "response_" + req)
                    .elapsed()
                    .doOnNext(it -> System.out.println("I took " + it.getT1() + " MS"))
                    .map(Tuple2::getT2)
                    .doOnSuccess(rsp -> processResponse(rsp)))
            .collectList()
            .block();
}
@SneakyThrows
public void processResponse(Object it) {
    System.out.println("This is the response: " + it);
    Thread.sleep(1000);
}
The output looks like this:
I took 112 MS
This is the response: response_1
I took 205 MS
This is the response: response_2
I took 305 MS
This is the response: response_3
I took 403 MS
This is the response: response_4
I took 504 MS
This is the response: response_5
Those numbers represent both the delay (which in your case would be the webClient.send()) and a little overhead from the reactive pipeline itself. The time is measured between subscription (which happens when the flatMap for a specific request runs) and the next signal (the result of the map in my case; in yours, the result of the WebClient request).
Your code would look something like this:
Flux.fromIterable(requests)
.flatMap(req -> webClient.send(req, MyResponse.class)
.elapsed()
.doOnNext(it -> System.out.println("I took " + it.getT1() + " MS"))
.map(Tuple2::getT2)
.doOnSuccess(rsp -> processResponse(rsp))) //assume some longer routine
.collectList()
.block();
Note: if you want to use a StopWatch instead, that is also possible by doing something like this:
Flux.fromIterable(List.of(1, 2, 3, 4, 5)).flatMap(req -> {
StopWatch stopWatch = new StopWatch();
return Mono.fromRunnable(stopWatch::start)
.then(Mono.delay(Duration.ofMillis(100L * req)).map(it -> "response_" + req).doOnNext(it -> {
stopWatch.stop();
System.out.println("I took " + stopWatch.getTime() + " MS");
}).doOnSuccess(this::processResponse));
}).collectList().block();
But personally I would recommend the .elapsed() solution, since it's a bit cleaner.
I would avoid using a stopwatch directly in that method. Rather, create a metrics wrapper which can be used in other places as well.
You can leverage .doOnSubscribe(), .doOnError(), .doOnSuccess()
But to answer your question, you can have a timer along these lines:
public void sendRequest() {
    Flux.fromIterable(requests)
            .flatMap(req -> webClient.send(req, MyResponse.class)
                    .transform(timerPublisher("time took for " + req.id)))
            .collectList()
            .block();
}

// this can be made more sophisticated by determining what kind of publisher it is
// (mono or flux)
private <T> Function<Mono<T>, Publisher<T>> timerPublisher(String metric) {
    StopWatchHelper stopWatch = new StopWatchHelper(metric);
    return mono -> mono.doOnSubscribe(stopWatch.start())
            .doOnSuccess(result -> stopWatch.record())
            .doOnError(stopWatch::record);
}

private class StopWatchHelper {
    private final StopWatch stopWatch;
    private final String metric;

    public StopWatchHelper(String metric) {
        this.metric = metric;
        stopWatch = new StopWatch();
    }

    public Consumer<Subscription> start() {
        return s -> stopWatch.start();
    }

    public void record() {
        if (stopWatch.isStarted()) {
            System.out.println(String.format("Metric %s took %s", metric, stopWatch.getTime()));
        }
    }

    public void record(Throwable t) {
        if (stopWatch.isStarted()) {
            System.out.println(String.format("Metric %s took %s, reported in error %s", metric, stopWatch.getTime(), t));
        }
    }
}
PS: Avoid using .block(); it defeats the purpose of going reactive :)
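As a small, hedged sketch of that hint (my own addition, reusing the hypothetical names from above), you would return the pipeline and let the caller or framework subscribe instead of blocking on it:
// Sketch: expose the pipeline instead of calling block().
public Mono<List<MyResponse>> sendRequests() {
    return Flux.fromIterable(requests)
            .flatMap(req -> webClient.send(req, MyResponse.class)
                    .transform(timerPublisher("time took for " + req.id)))
            .collectList();
}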
Spring Boot provides an out-of-the-box feature that adds instrumentation to your WebClient.
You can "enable" these metrics by using the auto-configured WebClient.Builder to create your WebClient instances, i.e.:
@Bean
public WebClient myCustomWebClient(WebClient.Builder builder) {
    return builder
            // your custom web client config code
            .build();
}
This instrumentation will time each individual API call made by your WebClient and register it in your configured MeterRegistry.
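As an illustrative follow-up (not part of the answer above; "http.client.requests" is the default metric name used by Spring Boot's WebClient instrumentation and may differ in your setup), you could then read the maximum observed request time back from the MeterRegistry:
// Sketch: query the auto-configured WebClient timer after the requests have completed.
Timer timer = meterRegistry.get("http.client.requests").timer();
System.out.println("max request time: " + timer.max(TimeUnit.MILLISECONDS) + " ms");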
Reference Docs

How to sequentially chain Vertx CompositeFuture using RXJava?

I need to chain Vertx CompositeFutures sequentially, in order and RxJava-style, because each CompositeFuture depends on the previous one, and I want to avoid callback hell.
The use case:
Each CompositeFuture.any/all does some async operations that return futures, let's say myList1, myList2, myList3, but I must wait for CompositeFuture.any(myList1) to complete and return success before doing CompositeFuture.any(myList2), and the same from myList2 to myList3. Naturally, the CompositeFuture itself does its jobs asynchronously, but only for its own set of operations, since the next set has to start only after the previous set completes successfully.
Doing it in a "callback-hell style" would be:
public static void myFunc(Vertx vertx, Handler<AsyncResult<CompositeFuture>> asyncResultHandler) {
CompositeFuture.any(myList1 < Future >)
.onComplete(ar1 -> {
if (!ar1.succeeded()) {
asyncResultHandler.handle(ar1);
} else {
CompositeFuture.any(myList2 < Future >)
.onComplete(ar2 -> {
if (!ar2.succeeded()) {
asyncResultHandler.handle(ar2);
} else {
CompositeFuture.all(myList3 < Future >)
.onComplete(ar3 -> {
asyncResultHandler.handle(ar3);
.... <ARROW OF CLOSING BRACKETS> ...
}
Now I tried something like this:
public static void myFunc(Vertx vertx, Handler<AsyncResult<CompositeFuture>> asyncResultHandler) {
Single
.just(CompositeFuture.any(myList1 < Future >))
.flatMap(previousFuture -> rxComposeAny(previousFuture, myList2 < Future >))
.flatMap(previousFuture -> rxComposeAll(previousFuture, myList3 < Future >))
.subscribe(SingleHelper.toObserver(asyncResultHandler));
}
public static Single<CompositeFuture> rxComposeAny(CompositeFuture previousResult, List<Future> myList) {
if (previousResult.failed()) return Single.just(previousResult); // See explanation below
CompositeFuture compositeFuture = CompositeFuture.any(myList);
return Single.just(compositeFuture);
}
public static Single<CompositeFuture> rxComposeAll(CompositeFuture previousResult, List<Future> myList) {
if (previousResult.failed()) return Single.just(previousResult);
CompositeFuture compositeFuture = CompositeFuture.all(myList);
return Single.just(compositeFuture);
}
Much more compact and clear. But I am not succeeding in passing the previous failures to the asyncResultHandler.
My idea was as follows: the flatMap passes the previous CompositeFuture result, and I want to check whether it failed. The next rxComposeAny/All first checks whether the previous one failed; if so, it just returns the failed CompositeFuture, and so on until it hits the handler in the subscriber. If the previous one passed the test, I'm OK to keep passing the current result on, until the last successful CompositeFuture hits the handler.
The problem is that the check
if (previousResult.failed()) return Single.just(previousResult); // See explanation below
doesn't work: all the CompositeFutures are processed but never tested for successful completion; only the last one ends up being passed to the asyncResultHandler, which tests for overall failure (but in the case of my code, it ends up checking just the last one).
I'm using Vertx 3.9.0 and the RxJava 2 Vertx API.
Disclosure: I have experience with Vertx, but I'm totally new to RxJava. So I appreciate any answer, from technical solutions to conceptual explanations.
Thank you.
EDIT (after the excellent response from @homerman):
I need to have the exact same behavior as the "callback hell style" of sequentially dependent CompositeFutures, i.e. the next one must be called after onComplete and tested for completion with failure or success. The complexity comes from the fact that:
I have to use the Vertx CompositeFuture.all/any methods, not zip. Zip provides behaviour similar to all, but not to any.
CompositeFuture.all/any return the completed future only inside the onComplete method. If I check it before that, as shown above, I will get unresolved futures, since it is async.
CompositeFuture.all/any, when failed, do not throw an error but deliver a failed future inside onComplete, so I cannot use onError from RxJava.
For example, I tried the following change in the rxComposite function:
public static Single<CompositeFuture> rxLoadVerticlesAny(CompositeFuture previousResult, Vertx vertx, String deploymentName,
List<Class<? extends Verticle>> verticles, JsonObject config) {
previousResult.onComplete(event -> {
if (event.failed()) {
return Single.just(previousResult);
} else {
CompositeFuture compositeFuture = CompositeFuture.any(VertxDeployHelper.deploy(vertx, verticles, config));
return Single.just(compositeFuture);
}
}
);
}
But naturally it does not compile, since the lambda returns void. How can I reproduce this exact same behavior with RxJava in Vertx?
Just to clarify something...
Each CompositeFuture.any/all do some async operations that return
futures, lets say myList1, myList2, myList3, but I must wait for
CompositeFuture.any(myList1) to complete and return success before
doing CompositeFuture.any(myList2), and the same from myList2 to
myList3.
You've offered CompositeFuture.any() and CompositeFuture.all() as points of reference, but the behavior you describe is consistent with all(), which is to say the resulting composite will yield success only if all its constituents do.
For the purpose of my answer, I'm assuming all() is the behavior you expect.
In RxJava, an unexpected error triggered by an exception will result in termination of the stream with the underlying exception being delivered to the observer via the onError() callback.
As a small demo, assume the following setup:
final Single<String> a1 = Single.just("Batch-A-Operation-1");
final Single<String> a2 = Single.just("Batch-A-Operation-2");
final Single<String> a3 = Single.just("Batch-A-Operation-3");
final Single<String> b1 = Single.just("Batch-B-Operation-1");
final Single<String> b2 = Single.just("Batch-B-Operation-2");
final Single<String> b3 = Single.just("Batch-B-Operation-3");
final Single<String> c1 = Single.just("Batch-C-Operation-1");
final Single<String> c2 = Single.just("Batch-C-Operation-2");
final Single<String> c3 = Single.just("Batch-C-Operation-3");
Each Single represents a discrete operation to be performed, and they are named according to their logical grouping (i.e. they are meant to be executed together). For example, "Batch-A" corresponds to your "myList1", "Batch-B" to your "myList2", ...
Assume the following stream:
Single
.zip(a1, a2, a3, (s, s2, s3) -> {
return "A's completed successfully";
})
.flatMap((Function<String, SingleSource<String>>) s -> {
throw new RuntimeException("B's failed");
})
.flatMap((Function<String, SingleSource<String>>) s -> {
return Single.zip(c1, c2, c3, (one, two, three) -> "C's completed successfully");
})
.subscribe(
s -> System.out.println("## onSuccess(" + s + ")"),
t -> System.out.println("## onError(" + t.getMessage() + ")")
);
(If you're not familiar with it, the zip() operator can be used to combine the results of all the sources supplied as input and emit another/new source.)
In this stream, because the processing of the B's ends up throwing an exception:
the stream is terminated during the execution of the B's
the exception is reported to the observer (i.e. the onError() handler is triggered)
the C's are never processed
If what you want, however, is to decide for yourself whether or not to execute each branch, one approach you could take is to pass the results from previous operations down the stream using some sort of state holder, like so:
class State {
final String value;
final Throwable error;
State(String value, Throwable error) {
this.value = value;
this.error = error;
}
}
The stream could then be modified to conditionally execute different batches, for example:
Single
.zip(a1, a2, a3, (s, s2, s3) -> {
try {
// Execute the A's here...
return new State("A's completed successfully", null);
} catch(Throwable t) {
return new State(null, t);
}
})
.flatMap((Function<State, SingleSource<State>>) s -> {
if(s.error != null) {
// If an error occurred upstream, skip this batch...
return Single.just(s);
} else {
try {
// ...otherwise, execute the B's
return Single.just(new State("B's completed successfully", null));
} catch(Throwable t) {
return Single.just(new State(null, t));
}
}
})
.flatMap((Function<State, SingleSource<State>>) s -> {
if(s.error != null) {
// If an error occurred upstream, skip this batch...
return Single.just(s);
} else {
try {
// ...otherwise, execute the C's
return Single.just(new State("C's completed successfully", null));
} catch(Throwable t) {
return Single.just(new State(null, t));
}
}
})
.subscribe(
s -> {
if(s.error != null) {
System.out.println("## onSuccess with error: " + s.error.getMessage());
} else {
System.out.println("## onSuccess without error: " + s.value);
}
},
t -> System.out.println("## onError(" + t.getMessage() + ")")
);
After some research in the Vertx source code, I found a public method that the rx version of CompositeFuture uses to convert a 'traditional' CompositeFuture to its rx version: io.vertx.reactivex.core.CompositeFuture.newInstance. With this workaround, I could use my traditional method and then convert the result for use in the rx chain. This was what I wanted, because it was problematic to change the existing traditional method.
Here is the code with comments:
rxGetConfig(vertx)
.flatMap(config -> {
return rxComposeAny(vertx, config)
.flatMap(r -> rxComposeAny(vertx, config))
.flatMap(r -> rxComposeAll(vertx, config));
})
.subscribe(
compositeFuture -> {
compositeFuture.onSuccess(event -> startPromise.complete());
},
error -> startPromise.fail(error));
public static Single<JsonObject> rxGetConfig(Vertx vertx) {
ConfigRetrieverOptions enrichConfigRetrieverOptions = getEnrichConfigRetrieverOptions();
// the reason we create new vertx is just to get an instance that is rx
// so this ConfigRetriever is from io.vertx.reactivex.config, instead of normal io.vertx.config
ConfigRetriever configRetriever = ConfigRetriever.create(io.vertx.reactivex.core.Vertx.newInstance(vertx), enrichConfigRetrieverOptions);
return configRetriever.rxGetConfig();
}
public static Single<io.vertx.reactivex.core.CompositeFuture> rxComposeAny(Vertx vertx, JsonObject config) {
// instead of adapting all the parameters of myMethodsThatReturnsFutures to be rx compliant,
// we create it 'normally' and then convert it below to an rx CompositeFuture
CompositeFuture compositeFuture = CompositeFuture.any(myMethodsThatReturnsFutures(config));
return io.vertx.reactivex.core.CompositeFuture
.newInstance(compositeFuture)
.rxOnComplete();
}

Rxjava retryWhen called instantly

I'm having a very specific problem with, or misunderstanding of, RxJava that someone can hopefully help with.
I'm running RxJava 2.1.5 and have the following code snippet:
public static void main(String[] args) {
final Observable<Object> observable = Observable.create(emitter -> {
// Code ...
});
observable.subscribeOn(Schedulers.io())
.retryWhen(error -> {
System.out.println("retryWhen");
return error.retry();
}).subscribe(next -> System.out.println("subscribeNext"),
error -> System.out.println("subscribeError"));
}
After executing this, the program prints:
retryWhen
Process finished with exit code 0
My question, and what I don't understand is: Why is retryWhen called instantly upon subscribing to an Observable? The observable does nothing.
What I want is retryWhen to be called when onError is called on the emitter. Am I misunderstanding how rx works?
Thanks!
Adding new snippet:
public static void main(String[] args) throws InterruptedException {
final Observable<Object> observable = Observable.create(emitter -> {
emitter.onNext("next");
emitter.onComplete();
});
final CountDownLatch latch = new CountDownLatch(1);
observable.subscribeOn(Schedulers.io())
.doOnError(error -> System.out.println("doOnError: " + error.getMessage()))
.retryWhen(error -> {
System.out.println("retryWhen: " + error.toString());
return error.retry();
}).subscribe(next -> System.out.println("subscribeNext"),
error -> System.out.println("subscribeError"),
() -> latch.countDown());
latch.await();
}
The emitter's onNext and onComplete are called. doOnError is never called. The output is:
retryWhen: io.reactivex.subjects.SerializedSubject@35fb3008
subscribeNext
Process finished with exit code 0
retryWhen calls the provided function when an Observer subscribes to it, so you have a main sequence accompanied by a sequence that emits the Throwable the main sequence failed with. You should compose logic onto the Observable you get in this Function so that, in the end, one Throwable results in a value on the other end.
Observable.error(new IOException())
.retryWhen(e -> {
System.out.println("Setting up retryWhen");
int[] count = { 0 };
return e
.takeWhile(v -> ++count[0] < 3)
.doOnNext(v -> { System.out.println("Retrying"); });
})
.subscribe(System.out::println, Throwable::printStackTrace);
Since the e -> { } function body is executed for each individual subscriber, you can safely keep per-subscriber state, such as a retry counter.
Using e -> e.retry() has no effect because the input error flow never gets its onError called.
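As a further hedged sketch (my own addition, assuming RxJava 2), a common pattern is to derive the retry signal from the error flow itself, for example waiting between attempts:
// Sketch: retry up to 3 times, waiting 1 second before each new attempt.
// Note: once the 3 attempts are used up the inner sequence completes,
// so the main sequence completes rather than errors.
Observable.error(new IOException("boom"))
        .retryWhen(errors -> errors
                .zipWith(Observable.range(1, 3), (error, attempt) -> attempt)
                .flatMap(attempt -> Observable.timer(1, TimeUnit.SECONDS)))
        .blockingSubscribe(System.out::println, Throwable::printStackTrace);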
One issue is that you don't receive any more results because subscribeOn(Schedulers.io()) moves the work to a background thread, and your app finishes before anything is emitted. To see the behaviour you may want to keep the app running, for example with a loop.
That means you need to add something like this at the end of your code:
while (true) {}
Another issue is that you don't emit any error in your sample. You also need to emit at least one value via onNext(), otherwise it won't repeat, because it is waiting for one.
Here's a working example which emits a value, then an error, and then repeats. You can use
.retryWhen(errors -> errors)
which is the same as
.retryWhen(errors -> errors.retry())
Working sample:
public static void main(String[] args) {
Observable
.create(e -> {
e.onNext("test");
e.onError(new Throwable("test"));
})
.retryWhen(errors -> errors.retry())
.subscribeOn(Schedulers.io())
.subscribe(
next -> System.out.println("subscribeNext"),
error -> System.out.println("subscribeError"),
() -> System.out.println("onCompleted")
);
while (true) {
}
}
The reason you need to emit a result is that the Observable needs to emit a value, otherwise it waits until it receives a new one.
This is because onError can only be called once (in subscribe), while onNext emits 1..* values.
You can check this behaviour by using doOnError(), which provides you the error every time the Observable is retried.
Observable
.create(e -> e.onError(new Exception("empty")))
.doOnError(e -> System.out.println("error received " + e))
.retryWhen(errors -> errors.retry())
.subscribeOn(Schedulers.io())
.subscribe(
nextOrSuccess -> System.out.println("nextOrSuccess " + nextOrSuccess),
error -> System.out.println("subscribeError")
);

Debounce ignoring timeout in unit tests

I have this method to fetch search results from an API:
public void fetchSearchResults(Observable<String> searchObservable) {
searchObservable
.filter(search -> !TextUtils.isEmpty(search))
.debounce(700, TimeUnit.MILLISECONDS)
.observeOn(AndroidSchedulers.mainThread())
.doOnNext(search -> getView().showLoader())
.switchMap(search -> apiService.fetchSearchResults(search)) //Api call is executed on an io scheduler
.subscribe(consumer, errorConsumer);
}
and I wrote this JUnit test for this method:
@Test
public void fetchSearchResultsTest() {
TestScheduler testScheduler = new TestScheduler();
Observable<String> nameObservable = Observable.just("","FA")
.concatMap(search -> Observable.just(search).delay(100,
TimeUnit.MILLISECONDS, testScheduler));
testScheduler.advanceTimeBy(100, TimeUnit.MILLISECONDS);
verify(view, never()).showLoader();
testScheduler.advanceTimeBy(100, TimeUnit.MILLISECONDS);
verify(view, never()).showLoader();
}
But the test fails on the last verify statement with message
org.mockito.exceptions.verification.NeverWantedButInvoked
view.showLoader();
I have tried passing a TestScheduler to the debounce operator and setting the default computation scheduler to a TestScheduler through RxJavaPlugins, but the result does not change; the test still fails.
If the test is failing, that would mean the debounce operator is sending the event right through, ignoring the timeout passed in its arguments. I don't know if this is correct, but that is as far as I understand it. So, my question is: how can I fix this test and control the events from the debounce operator, like I'm doing with the source observable via the TestScheduler?
Your test is failing because of the onCompleted() that occurs when the second item is emitted. The documentation says that debounce() will emit the final item immediately when it receives onCompleted().
To make your test work, either concatenate an Observable.never(), or add more items into the pipeline.
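For illustration only, a rough sketch of how the fixed test could look. This assumes the debounce scheduler has been routed to the TestScheduler (e.g. via RxJavaPlugins), the Android main-thread scheduler has been replaced for the test, and that the class under test is available as presenter; all of those names are assumptions, not part of the original question.
TestScheduler testScheduler = new TestScheduler();
// concatWith(never()) keeps the source open, so onComplete cannot flush the
// debounced item through early.
Observable<String> nameObservable = Observable.just("", "FA")
        .concatMap(search -> Observable.just(search).delay(100, TimeUnit.MILLISECONDS, testScheduler))
        .concatWith(Observable.never());
presenter.fetchSearchResults(nameObservable); // hypothetical presenter under test
testScheduler.advanceTimeBy(200, TimeUnit.MILLISECONDS); // both items emitted, debounce window still open
verify(view, never()).showLoader();
testScheduler.advanceTimeBy(700, TimeUnit.MILLISECONDS); // debounce window elapses for "FA"
verify(view).showLoader();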
Here is an article on using debounce for doing auto-completion.
