retryWhen operator never retries - java

I'm implementing a DB update with retries, following the common pattern for the retryWhen() operator as explained here: Using Rx Java retryWhen().
But my retry logic never executes. While debugging I can see the breakpoint hit at place 3 shown below, but execution never goes back to the retry logic at place 2. After place 3, it always goes to place 4, which is the onComplete handler.
(Code is using Java 8 lambdas)
I've applied a workaround by removing the retryWhen() block altogether
and now invoking updateWithRetrials() recursively from subscribe's onError() block. That works, but I don't like that approach.
Can anyone suggest what is incorrect in my use of the retryWhen() operator?
private void updateWithRetrials(some input x)
{
    AtomicBoolean retryingUpdate = new AtomicBoolean(false);
    ...
    // 1 - Start from here
    Observable.<JsonDocument>just(x).map(x1 -> {
        if (retryingUpdate.get())
        {
            // 2 - retry logic
        }
        // doing sth with x1 here
        ...
        return <some observable>;
    })
    .retryWhen(attempts -> attempts.flatMap(n -> {
        Throwable cause = n.getThrowable();
        if (cause instanceof <errors of interest>)
        {
            // 3 - breakpoint hits here
            // retry update in 1 sec again
            retryingUpdate.set(true);
            return Observable.timer(1, TimeUnit.SECONDS);
        }
        // fail in all other cases...
        return Observable.error(n.getThrowable());
    }))
    .subscribe(
        doc -> {
            // .. update was successful
        },
        onError -> {
            // for unhandled errors in retryWhen() block
        },
        () -> {
            // 4 - onComplete block
            System.out.println("Update() call completed.");
        }
    ); // subscribe ends here
}

Your problem is due to a performance optimisation in Observable.just().
After emitting the item, this operator does not check whether the subscription has been cancelled, and sends onComplete in all cases.
Observable.retryWhen (and retry) resubscribes on error, but terminates when the source sends onComplete.
Thus, even if the retry operator resubscribes, it gets the onComplete from the previous subscription and stops.
You can see that the code below fails (like yours):
@Test
public void testJustAndRetry() throws Exception {
    AtomicBoolean throwException = new AtomicBoolean(true);
    int value = Observable.just(1).map(v -> {
        if (throwException.compareAndSet(true, false)) {
            throw new RuntimeException();
        }
        return v;
    }).retry(1).toBlocking().single();
}
But if you "don't forget" to check the subscription, it works:
@Test
public void testCustomJust() throws Exception {
    AtomicBoolean throwException = new AtomicBoolean(true);
    int value = Observable.create((Subscriber<? super Integer> s) -> {
        s.onNext(1);
        if (!s.isUnsubscribed()) {
            s.onCompleted();
        }
    }).map(v -> {
        if (throwException.compareAndSet(true, false)) {
            throw new RuntimeException();
        }
        return v;
    }).retry(1).toBlocking().single();
    Assert.assertEquals(1, value);
}
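A simpler alternative (a sketch, not from the original answer, assuming a reasonably recent RxJava 1.x) is to move the fallible work into Observable.fromCallable(), which re-runs the callable on every resubscription, so retry()/retryWhen() behave as expected:
@Test
public void testFromCallableAndRetry() throws Exception {
    AtomicBoolean throwException = new AtomicBoolean(true);
    int value = Observable.fromCallable(() -> {
        if (throwException.compareAndSet(true, false)) {
            throw new RuntimeException();   // fails only on the first subscription
        }
        return 1;                           // succeeds on the retry
    }).retry(1).toBlocking().single();
    Assert.assertEquals(1, value);
}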

I suppose the error occurs inside map(), because it cannot occur in just(); that is not how retryWhen works.
Implement your observable using create() and make sure no errors are thrown in map(). If an error is thrown in the create() block, retryWhen() will be invoked and the unit of work retried according to your retry logic.
Observable.create(subscriber -> {
    // code that may throw exceptions
}).map(item -> {
    // code that will not throw any exceptions
}).retryWhen(...)
...
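For the question's scenario, a fuller sketch along those lines (hedged: updateDocument() and TemporaryFailureException are placeholders for the real DB call and the "errors of interest", and a recent RxJava 1.x retryWhen() signature that emits the Throwable directly is assumed):
Observable
    .fromCallable(() -> updateDocument(x))                  // re-executed on every (re)subscription
    .retryWhen(errors -> errors.flatMap(error -> {
        if (error instanceof TemporaryFailureException) {   // placeholder for the errors worth retrying
            return Observable.timer(1, TimeUnit.SECONDS);    // resubscribe (and rerun the update) in 1s
        }
        return Observable.error(error);                      // anything else fails the stream
    }))
    .subscribe(
        doc -> System.out.println("update succeeded"),
        error -> System.out.println("update failed: " + error),
        () -> System.out.println("Update() call completed."));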

Related

How to sequentially chain Vertx CompositeFuture using RXJava?

I need to chain Vertx CompositeFutures sequentially, in order, in an RxJava style for dependent CompositeFutures, avoiding callback hell.
The use case:
Each CompositeFuture.any/all does some async operations that return futures, let's say myList1, myList2, myList3, but I must wait for CompositeFuture.any(myList1) to complete and return success before doing CompositeFuture.any(myList2), and the same from myList2 to myList3. Naturally, the CompositeFuture itself does its jobs asynchronously, but only for its own set of operations, since the next set has to start only after the first set completes successfully.
Doing it in a "callback-hell style" would be:
public static void myFunc(Vertx vertx, Handler<AsyncResult<CompositeFuture>> asyncResultHandler) {
CompositeFuture.any(myList1 < Future >)
.onComplete(ar1 -> {
if (!ar1.succeeded()) {
asyncResultHandler.handle(ar1);
} else {
CompositeFuture.any(myList2 < Future >)
.onComplete(ar2 -> {
if (!ar2.succeeded()) {
asyncResultHandler.handle(ar2);
} else {
CompositeFuture.all(myList3 < Future >)
.onComplete(ar3 -> {
asyncResultHandler.handle(ar3);
.... <ARROW OF CLOSING BRACKETS> ...
}
Now I tried something like this:
public static void myFunc(Vertx vertx, Handler<AsyncResult<CompositeFuture>> asyncResultHandler) {
    Single
        .just(CompositeFuture.any(myList1 /* List<Future> */))
        .flatMap(previousFuture -> rxComposeAny(previousFuture, myList2 /* List<Future> */))
        .flatMap(previousFuture -> rxComposeAll(previousFuture, myList3 /* List<Future> */))
        .subscribe(SingleHelper.toObserver(asyncResultHandler));
}

public static Single<CompositeFuture> rxComposeAny(CompositeFuture previousResult, List<Future> myList) {
    if (previousResult.failed()) return Single.just(previousResult); // See explanation below
    CompositeFuture compositeFuture = CompositeFuture.any(myList);
    return Single.just(compositeFuture);
}

public static Single<CompositeFuture> rxComposeAll(CompositeFuture previousResult, List<Future> myList) {
    if (previousResult.failed()) return Single.just(previousResult);
    CompositeFuture compositeFuture = CompositeFuture.all(myList);
    return Single.just(compositeFuture);
}
Much more compact and clear. But I am not succeeding in passing the previous failures to the asyncResultHandler.
My idea was as follows: the flatMap passes the previous CompositeFuture result, and I want to check whether it failed. The next rxComposeAny/All first checks whether the previous one failed; if so, it just returns the failed CompositeFuture, and so on until it hits the handler in the subscriber. If the previous one passed the test, I'm OK to continue passing the current result until the last successful CompositeFuture hits the handler.
The problem is that the check
if (previousResult.failed()) return Single.just(previousResult); // See explanation below
doesn't work, and all the CompositeFutures are processed but never tested for successful completion; only the last one ends up being passed to the asyncResultHandler, which tests for overall failure (but in my code it ends up checking just the last one).
I'm using Vert.x 3.9.0 and the RxJava 2 Vert.x API.
Disclosure: I have experience with Vert.x, but I'm totally new to RxJava. So I appreciate any answer, from technical solutions to conceptual explanations.
Thank you.
EDIT (after the excellent response of @homerman):
I need to have the exact same behavior as the "callback hell style" of sequentially dependent CompositeFutures, i.e., the next must be called after onComplete and tested for completion with failure or success. The complexity comes from the fact that:
I have to use the Vertx CompositeFuture.all/any methods, not zip. Zip provides behaviour similar to CompositeFuture.all, but not CompositeFuture.any.
CompositeFuture.all/any return the completed future only inside the onComplete method. If I check it earlier, as shown above, I get unresolved futures, since it is async.
If CompositeFuture.all/any fails, it does not throw an error but yields a failed future inside onComplete, so I cannot use RxJava's onError.
For example, I tried the following change in the rxComposite function:
public static Single<CompositeFuture> rxLoadVerticlesAny(CompositeFuture previousResult, Vertx vertx, String deploymentName,
List<Class<? extends Verticle>> verticles, JsonObject config) {
previousResult.onComplete(event -> {
if (event.failed()) {
return Single.just(previousResult);
} else {
CompositeFuture compositeFuture = CompositeFuture.any(VertxDeployHelper.deploy(vertx, verticles, config));
return Single.just(compositeFuture);
}
}
);
}
But naturally it does not compile, since the lambda is void. How can I reproduce this exact same behavior in RxJava with Vert.x?
Just to clarify something...
Each CompositeFuture.any/all do some async operations that return
futures, lets say myList1, myList2, myList3, but I must wait for
CompositeFuture.any(myList1) to complete and return success before
doing CompositeFuture.any(myList2), and the same from myList2 to
myList3.
You've offered CompositeFuture.any() and CompositeFuture.all() as points of reference, but the behavior you describe is consistent with all(), which is to say the resulting composite will yield success only if all its constituents do.
For the purpose of my answer, I'm assuming all() is the behavior you expect.
In RxJava, an unexpected error triggered by an exception will result in termination of the stream with the underlying exception being delivered to the observer via the onError() callback.
As a small demo, assume the following setup:
final Single<String> a1 = Single.just("Batch-A-Operation-1");
final Single<String> a2 = Single.just("Batch-A-Operation-2");
final Single<String> a3 = Single.just("Batch-A-Operation-3");
final Single<String> b1 = Single.just("Batch-B-Operation-1");
final Single<String> b2 = Single.just("Batch-B-Operation-2");
final Single<String> b3 = Single.just("Batch-B-Operation-3");
final Single<String> c1 = Single.just("Batch-C-Operation-1");
final Single<String> c2 = Single.just("Batch-C-Operation-2");
final Single<String> c3 = Single.just("Batch-C-Operation-3");
Each Single represents a discrete operation to be performed, and they are named according to their logical grouping (i.e., the operations meant to be executed together). For example, "Batch-A" corresponds to your "myList1", "Batch-B" to your "myList2", and so on.
Assume the following stream:
Single
    .zip(a1, a2, a3, (s, s2, s3) -> {
        return "A's completed successfully";
    })
    .flatMap((Function<String, SingleSource<String>>) s -> {
        throw new RuntimeException("B's failed");
    })
    .flatMap((Function<String, SingleSource<String>>) s -> {
        return Single.zip(c1, c2, c3, (one, two, three) -> "C's completed successfully");
    })
    .subscribe(
        s -> System.out.println("## onSuccess(" + s + ")"),
        t -> System.out.println("## onError(" + t.getMessage() + ")")
    );
(If you're not familiar, the zip() operator can be used to combine the results of all the sources supplied as input to emit another/new source).
In this stream, because the processing of the B's ends up throwing an exception:
the stream is terminated during the execution of the B's
the exception is reported to the observer (ie the onError() handler is triggered)
the C's are never processed
If what you want, however, is to decide for yourself whether or not to execute each branch, one approach you could take is to pass the results from previous operations down the stream using some sort of state holder, like so:
class State {
    final String value;
    final Throwable error;

    State(String value, Throwable error) {
        this.value = value;
        this.error = error;
    }
}
The stream could then be modified to conditionally execute different batches, for example:
Single
    .zip(a1, a2, a3, (s, s2, s3) -> {
        try {
            // Execute the A's here...
            return new State("A's completed successfully", null);
        } catch (Throwable t) {
            return new State(null, t);
        }
    })
    .flatMap((Function<State, SingleSource<State>>) s -> {
        if (s.error != null) {
            // If an error occurred upstream, skip this batch...
            return Single.just(s);
        } else {
            try {
                // ...otherwise, execute the B's
                return Single.just(new State("B's completed successfully", null));
            } catch (Throwable t) {
                return Single.just(new State(null, t));
            }
        }
    })
    .flatMap((Function<State, SingleSource<State>>) s -> {
        if (s.error != null) {
            // If an error occurred upstream, skip this batch...
            return Single.just(s);
        } else {
            try {
                // ...otherwise, execute the C's
                return Single.just(new State("C's completed successfully", null));
            } catch (Throwable t) {
                return Single.just(new State(null, t));
            }
        }
    })
    .subscribe(
        s -> {
            if (s.error != null) {
                System.out.println("## onSuccess with error: " + s.error.getMessage());
            } else {
                System.out.println("## onSuccess without error: " + s.value);
            }
        },
        t -> System.out.println("## onError(" + t.getMessage() + ")")
    );
After some research in the Vertx source code, I found a public method that the rx version of CompositeFuture uses to convert a 'traditional' CompositeFuture to its rx version. The method is io.vertx.reactivex.core.CompositeFuture.newInstance. With this workaround, I could use my traditional method and then convert it for use in the rx chain. This was what I wanted, because it was problematic to change the existing traditional method.
Here is the code with comments:
rxGetConfig(vertx)
    .flatMap(config -> {
        return rxComposeAny(vertx, config)
            .flatMap(r -> rxComposeAny(vertx, config))
            .flatMap(r -> rxComposeAll(vertx, config));
    })
    .subscribe(
        compositeFuture -> {
            compositeFuture.onSuccess(event -> startPromise.complete());
        },
        error -> startPromise.fail(error));

public static Single<JsonObject> rxGetConfig(Vertx vertx) {
    ConfigRetrieverOptions enrichConfigRetrieverOptions = getEnrichConfigRetrieverOptions();
    // The reason we create a new Vertx here is just to get an rx instance,
    // so this ConfigRetriever is from io.vertx.reactivex.config instead of the plain io.vertx.config.
    ConfigRetriever configRetriever = ConfigRetriever.create(io.vertx.reactivex.core.Vertx.newInstance(vertx), enrichConfigRetrieverOptions);
    return configRetriever.rxGetConfig();
}

public static Single<io.vertx.reactivex.core.CompositeFuture> rxComposeAny(Vertx vertx, JsonObject config) {
    // Instead of adapting all the parameters of myMethodsThatReturnsFutures to be rx compliant,
    // we create it 'normally' and then convert it below to an rx CompositeFuture.
    CompositeFuture compositeFuture = CompositeFuture.any(myMethodsThatReturnsFutures(config));
    return io.vertx.reactivex.core.CompositeFuture
        .newInstance(compositeFuture)
        .rxOnComplete();
}
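rxComposeAll is not shown in the answer; presumably it is the same sketch with CompositeFuture.all() in place of any():
public static Single<io.vertx.reactivex.core.CompositeFuture> rxComposeAll(Vertx vertx, JsonObject config) {
    CompositeFuture compositeFuture = CompositeFuture.all(myMethodsThatReturnsFutures(config));
    return io.vertx.reactivex.core.CompositeFuture
        .newInstance(compositeFuture)
        .rxOnComplete();
}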

How to enforce timeout and cancel async CompletableFuture Jobs

I am using Java 8, and I want to know the recommended way to enforce a timeout on 3 async jobs that I would like to execute asynchronously and then retrieve the results from the futures. Note that the timeout is the same for all 3 jobs. I also want to cancel a job if it goes beyond the time limit.
I am thinking something like this:
// Submit jobs async
List<CompletableFuture<String>> futures = submitJobs(); // Uses CompletableFuture.supplyAsync
CompletableFuture<Void> allFutures = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));

try {
    allFutures.get(100L, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
    for (CompletableFuture<String> f : futures) {
        if (!f.isDone()) {
            /*
             From the Javadoc:
             @param mayInterruptIfRunning this value has no effect in this
             implementation because interrupts are not used to control
             processing.
            */
            f.cancel(true);
        }
    }
}

List<String> output = new ArrayList<>();
for (CompletableFuture<String> fu : futures) {
    if (!fu.isCancelled()) { // Is this needed?
        output.add(fu.join());
    }
}
return output;
Will something like this work? Is there a better way?
How do I cancel the future properly? The Javadoc says the thread cannot be interrupted. So, if I cancel a future and then call join(), will I get the result immediately, since the thread will not be interrupted?
Is it recommended to use join() or get() to retrieve the result after the waiting is over?
It is worth noting that calling cancel on a CompletableFuture is effectively the same as calling completeExceptionally on the current stage. The cancellation will not impact prior stages. With that said:
In principle, something like this will work, assuming upstream cancellation is not necessary.
CompletableFuture cancellation will not interrupt the current thread. Cancellation will cause all downstream stages to be triggered immediately with a CancellationException (it will short-circuit the execution flow).
join() and get() are effectively the same in the case where the caller is willing to wait indefinitely. join() handles wrapping the checked exceptions for you. If the caller wants to time out, get() will be needed.
Here is a segment to illustrate the behavior on cancellation. Note how downstream stages are not started, but upstream stages continue even after cancellation.
public static void main(String[] args) throws Exception
{
    int maxSleepTime = 1000;
    Random random = new Random();
    AtomicInteger value = new AtomicInteger();
    List<String> calculatedValues = new ArrayList<>();

    Supplier<String> process = () -> {
        try {
            Thread.sleep(random.nextInt(maxSleepTime));
            System.out.println("Stage 1 Running!");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return Integer.toString(value.getAndIncrement());
    };

    List<CompletableFuture<String>> stage1 = IntStream.range(0, 10)
            .mapToObj(val -> CompletableFuture.supplyAsync(process))
            .collect(Collectors.toList());
    List<CompletableFuture<String>> stage2 = stage1.stream().map(Test::appendNumber).collect(Collectors.toList());
    List<CompletableFuture<String>> stage3 = stage2.stream().map(Test::printIfCancelled).collect(Collectors.toList());

    CompletableFuture<Void> awaitAll = CompletableFuture.allOf(stage2.toArray(new CompletableFuture[0]));
    try
    {
        /* Wait 1/2 the time; some should be complete, some not complete -> TimeoutException */
        awaitAll.get(maxSleepTime / 2, TimeUnit.MILLISECONDS);
    }
    catch (TimeoutException ex)
    {
        for (CompletableFuture<String> toCancel : stage2)
        {
            boolean irrelevantValue = false;
            if (!toCancel.isDone())
                toCancel.cancel(irrelevantValue);
            else
                calculatedValues.add(toCancel.join());
        }
    }

    System.out.println("All futures Cancelled! But some Stage 1's may still continue printing anyways.");
    System.out.println("Values returned as of cancellation: " + calculatedValues);
    Thread.sleep(maxSleepTime);
}

private static CompletableFuture<String> appendNumber(CompletableFuture<String> baseFuture)
{
    return baseFuture.thenApply(val -> {
        System.out.println("Stage 2 Running");
        return "#" + val;
    });
}

private static CompletableFuture<String> printIfCancelled(CompletableFuture<String> baseFuture)
{
    return baseFuture
            .thenApply(val -> {
                System.out.println("Stage 3 Running!");
                return val;
            })
            .exceptionally(ex -> {
                System.out.println("Stage 3 Cancelled!");
                return ex.getMessage();
            });
}
If it is necessary to cancel the upstream process (ex: cancel some network call), custom handling will be needed.
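A hedged sketch of such custom handling (not from the answer): submit the work yourself, keep the worker's Future, and forward cancellation from the CompletableFuture to it:
static <T> CompletableFuture<T> supplyInterruptibly(Callable<T> task, ExecutorService executor) {
    CompletableFuture<T> cf = new CompletableFuture<>();
    Future<?> worker = executor.submit(() -> {
        try {
            cf.complete(task.call());
        } catch (Throwable t) {
            cf.completeExceptionally(t);
        }
    });
    // cancel(true) on cf completes it exceptionally, which triggers this callback
    // and lets us interrupt the thread actually running the task.
    cf.whenComplete((result, error) -> {
        if (cf.isCancelled()) {
            worker.cancel(true);
        }
    });
    return cf;
}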
After calling cancel you cannot join the future, since you get an exception.
One way to terminate the computation is to let it have a reference to the future and check it periodically: if it was cancelled, abort the computation from inside.
This can be done if the computation is a loop where at each iteration you can do the check.
Do you need it to be a CompletableFuture? Another way is to avoid CompletableFuture and use a plain Future or a FutureTask instead: if you execute it with an Executor, calling future.cancel(true) will terminate the computation if possible.
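A minimal sketch of that periodic-check idea (processChunk() and executor are placeholder names): the computation keeps a reference to its own future and aborts from inside once it notices the cancellation:
CompletableFuture<List<String>> result = new CompletableFuture<>();
executor.submit(() -> {
    List<String> rows = new ArrayList<>();
    for (int i = 0; i < 1_000 && !result.isCancelled(); i++) {
        rows.add(processChunk(i));   // one unit of work per iteration
    }
    if (!result.isCancelled()) {
        result.complete(rows);
    }
});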
Answering the question "if I call join(), will I get the result immediately?":
No, you will not get it immediately; it will hang and wait for the computation to complete. There is no way to force a computation that takes a long time to finish in a shorter time.
You can call future.complete(value), providing a value to be used as the default result by other threads that have a reference to that future.
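For example (a small sketch of that last point):
CompletableFuture<String> slow = CompletableFuture.supplyAsync(() -> {
    try { Thread.sleep(10_000); } catch (InterruptedException e) { }
    return "real result";
});

// Elsewhere, once we have waited long enough:
slow.complete("default result");   // no-op if the future has already completed
System.out.println(slow.join());   // returns immediately with "default result"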

Throwing exceptions from within CompletableFuture exceptionally clause

I am having a problem dealing with exceptions thrown from CompletableFuture methods. I thought it should be possible to throw an exception from within the exceptionally clause of a CompletableFuture. For example, in the method below, I expected executeWork to throw a RuntimeException because I am throwing one in the various exceptionally clauses; however, this does not work and I'm not sure why.
public void executeWork() {
    service.getAllWork().thenAccept(workList -> {
        for (String work : workList) {
            service.getWorkDetails(work)
                .thenAccept(a -> sendMessagetoQueue(work, a))
                .exceptionally(t -> {
                    throw new RuntimeException("Error occurred looking up work details");
                });
        }
    }).exceptionally(t -> {
        throw new RuntimeException("Error occurred retrieving work list");
    });
}
You're doing a few things wrong here (async programming is hard):
First, as @VGR noted, executeWork() will not throw an exception when things go bad, because all the actual work is done on another thread. executeWork() will actually return immediately, after scheduling all the work but without completing any of it. You can call get() on the last CompletableFuture, which will then wait for the work to complete or fail, and will throw any relevant exceptions. But that is force-syncing and considered an anti-pattern.
Secondly, you don't need to throw new RuntimeException() from the exceptionally() handler; that one is actually called with the correct error (t) in your case.
Looking at analogous synchronous code, your sample looks something like this:
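For reference, a minimal sketch of that first point (using the question's hypothetical service): return the last stage from executeWork() so the caller can decide whether to block, bearing in mind the force-sync caveat:
public CompletableFuture<Void> executeWork() {
    return service.getAllWork()
            .thenAccept(workList -> {
                // ... schedule the per-work futures as in the question ...
            });
}

// A caller that really needs the failure on the current thread:
try {
    executeWork().get();              // blocks; failures arrive wrapped in ExecutionException
} catch (ExecutionException e) {
    Throwable cause = e.getCause();   // the original error
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}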
try {
    for (String work : service.getAllWork()) {
        try {
            var a = service.getWorkDetails(work);
            sendMessageToQueue(work, a);
        } catch (SomeException e) {
            throw new RuntimeException("Error occurred looking up work details");
        }
    }
} catch (SomeOtherException e) {
    throw new RuntimeException("Error occurred retrieving work list");
}
So as you can see, there's no benefit in catching the exceptions and throwing a RuntimeException (which also hides the real error) instead of just letting the exceptions propagate to where you can handle them.
The purpose of an exceptionally() step is to recover from exceptions, such as supplying default values when retrieving data from a user or IO has failed, or similar things. Example:
service.getAllWork().thenApply(workList -> workList.stream()
        .map(work -> service.getWorkDetails(work)
                .thenAccept(a -> sendMessageToQueue(work, a))
                .exceptionally(e -> {
                    reportWorkFailureToQueue(work, e);
                    return null;
                })
        )
).thenCompose(futStream ->
        CompletableFuture.allOf(futStream.toArray(CompletableFuture[]::new)))
.exceptionally(e -> {
    // handle getAllWork() failures here; getWorkDetails()/sendMessageToQueue()
    // failures were resolved by the previous exceptionally() and converted to null values
    return null;
});

Rxjava retryWhen called instantly

I'm having a very specific problem, or misunderstanding, with RxJava that someone hopefully can help with.
I'm running RxJava 2.1.5 and have the following code snippet:
public static void main(String[] args) {
    final Observable<Object> observable = Observable.create(emitter -> {
        // Code ...
    });

    observable.subscribeOn(Schedulers.io())
        .retryWhen(error -> {
            System.out.println("retryWhen");
            return error.retry();
        }).subscribe(next -> System.out.println("subscribeNext"),
            error -> System.out.println("subscribeError"));
}
After executing this, the program prints:
retryWhen
Process finished with exit code 0
My question, and what I don't understand, is: why is retryWhen called instantly upon subscribing to an Observable? The observable does nothing.
What I want is for retryWhen to be called when onError is called on the emitter. Am I misunderstanding how Rx works?
Thanks!
Adding a new snippet:
public static void main(String[] args) throws InterruptedException {
    final Observable<Object> observable = Observable.create(emitter -> {
        emitter.onNext("next");
        emitter.onComplete();
    });

    final CountDownLatch latch = new CountDownLatch(1);
    observable.subscribeOn(Schedulers.io())
        .doOnError(error -> System.out.println("doOnError: " + error.getMessage()))
        .retryWhen(error -> {
            System.out.println("retryWhen: " + error.toString());
            return error.retry();
        }).subscribe(next -> System.out.println("subscribeNext"),
            error -> System.out.println("subscribeError"),
            () -> latch.countDown());
    latch.await();
}
The emitter's onNext and onComplete are called. doOnError is never called. The output is:
retryWhen: io.reactivex.subjects.SerializedSubject#35fb3008
subscribeNext
Process finished with exit code 0
retryWhen calls the provided function when an Observer subscribes to it, so you have a main sequence accompanied by a sequence that emits the Throwable the main sequence failed with. You should compose logic onto the Observable you get in this Function so that, at the end, one Throwable will result in a value on the other end.
Observable.error(new IOException())
    .retryWhen(e -> {
        System.out.println("Setting up retryWhen");
        int[] count = { 0 };
        return e
            .takeWhile(v -> ++count[0] < 3)
            .doOnNext(v -> { System.out.println("Retrying"); });
    })
    .subscribe(System.out::println, Throwable::printStackTrace);
Since the e -> { } function body is executed for each individual subscriber, you can safely keep per-subscriber state, such as a retry counter.
Using e -> e.retry() has no effect because the input error flow never gets its onError called.
One issue is that you don't receive any more results because subscribeOn(Schedulers.io()) moves the work onto a background thread, but your app's main thread finishes first. To see the retry behaviour, you may want a while loop to keep your app running.
That means you need to add something like this to the end of your code:
while (true) {}
Another issue is that you don't emit any error in your sample. You need to emit an error for a retry to happen; otherwise it won't repeat, because it is waiting for one.
Here's a working example which emits a value, then an error, and repeats. You can use
.retryWhen(errors -> errors)
which is the same as
.retryWhen(errors -> errors.retry())
Working sample:
public static void main(String[] args) {
    Observable
        .create(e -> {
            e.onNext("test");
            e.onError(new Throwable("test"));
        })
        .retryWhen(errors -> errors.retry())
        .subscribeOn(Schedulers.io())
        .subscribe(
            next -> System.out.println("subscribeNext"),
            error -> System.out.println("subscribeError"),
            () -> System.out.println("onCompleted")
        );

    while (true) {
    }
}
The reason you need to emit a result is that the Observable needs to emit a value; otherwise it waits until it receives a new one.
This is because onError can only be called once (in subscribe), while onNext emits 1..* values.
You can check this behaviour by using doOnError(), which hands you the error every time the Observable is retried.
Observable
    .create(e -> e.onError(new Exception("empty")))
    .doOnError(e -> System.out.println("error received " + e))
    .retryWhen(errors -> errors.retry())
    .subscribeOn(Schedulers.io())
    .subscribe(
        nextOrSuccess -> System.out.println("nextOrSuccess " + nextOrSuccess),
        error -> System.out.println("subscribeError")
    );

Proper termination of a stuck Couchbase Observable

I'm trying to delete a batch of Couchbase documents in rapid fashion according to some constraint (or update the document if the constraint isn't satisfied). Each deletion is dubbed a "parcel" in my terminology.
When executing, I run into very strange behavior: the thread in charge of this task starts working as expected for a few iterations (at best). After this "grace period", Couchbase gets "stuck" and the Observable doesn't call any of its Subscriber's methods (onNext, onCompleted, onError) within the defined period of 30 seconds.
When the latch timeout occurs (see implementation below), the method returns but the Observable keeps executing (I noticed this when it kept printing debug messages while stopped at a breakpoint outside the scope of this method).
I suspect Couchbase is stuck because, after a few seconds, many Observables are left in some kind of "ghost" state: alive and reporting to their Subscribers, which in turn have nothing to do because the method in which they were created has already finished, eventually leading to java.lang.OutOfMemoryError: GC overhead limit exceeded.
I don't know if what I claim here makes sense, but I can't think of another reason for this behavior.
How should I properly terminate an Observable upon timeout? Should I? Is there another way around this?
public List<InfoParcel> upsertParcels(final Collection<InfoParcel> parcels) {
    final CountDownLatch latch = new CountDownLatch(parcels.size());
    final List<JsonDocument> docRetList = new LinkedList<JsonDocument>();

    Observable<JsonDocument> obs = Observable
        .from(parcels)
        .flatMap(parcel ->
            Observable.defer(() -> {
                return bucket.async().get(parcel.key).firstOrDefault(null);
            })
            .map(doc -> {
                // In-memory manipulation of the document
                return updateDocs(doc, parcel);
            })
            .flatMap(doc -> {
                boolean shouldDelete = ... // Decide by inner logic
                if (shouldDelete) {
                    if (doc.cas() == 0) {
                        return Observable.just(doc);
                    }
                    return bucket.async().remove(doc);
                }
                return (doc.cas() == 0 ? bucket.async().insert(doc) : bucket.async().replace(doc));
            })
        );

    obs.subscribe(new Subscriber<JsonDocument>() {
        @Override
        public void onNext(JsonDocument doc) {
            docRetList.add(doc);
            latch.countDown();
        }

        @Override
        public void onCompleted() {
            // Due to a bug in RxJava, onError() / retryWhen() does not intercept exceptions thrown from within the map/flatMap methods.
            // Therefore, we need to recalculate the "conflicted" parcels and send them for update again.
            while (latch.getCount() > 0) {
                latch.countDown();
            }
        }

        @Override
        public void onError(Throwable e) {
            // Same reason as above
            while (latch.getCount() > 0) {
                latch.countDown();
            }
        }
    });

    latch.await(30, TimeUnit.SECONDS);
    // Recalculating remaining failed parcels and returning them for another cycle of this method (there's a loop outside)
}
I think this is indeed due to the fact that using a countdown latch doesn't signal the source that the flow of data processing should stop.
You could lean more on RxJava by using toList().timeout(30, TimeUnit.SECONDS).toBlocking().single() instead of collecting into an (unsynchronized and thus unsafe) external list and using the CountDownLatch.
This will block until a List of your documents is returned.
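A rough sketch of that suggestion (RxJava 1.x), where upsertSingleParcel() stands in for the question's per-parcel get/update/remove chain:
List<JsonDocument> docRetList = Observable
    .from(parcels)
    .flatMap(parcel -> upsertSingleParcel(parcel))   // same inner pipeline as in the question
    .toList()                                        // collect inside the stream, no external list
    .timeout(30, TimeUnit.SECONDS)                   // fail the whole flow instead of abandoning it
    .toBlocking()
    .single();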
When you create your Couchbase environment in code, set computationPoolSize to something larger. When the Couchbase client runs out of threads while using the async API, it just stops working and won't ever call the callback.
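A sketch of that setting, assuming the Couchbase Java SDK 2.x environment builder (the size is workload-dependent; 16 here is only an illustration):
// Build an environment with a larger computation pool, then open the bucket from it.
CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
    .computationPoolSize(16)
    .build();
Cluster cluster = CouchbaseCluster.create(env, "127.0.0.1");
Bucket bucket = cluster.openBucket("myBucket");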
