This will emit a tick every 5 seconds.
Observable.interval(5, TimeUnit.SECONDS, Schedulers.io())
.subscribe(tick -> Log.d(TAG, "tick = "+tick));
To stop it you can use
Schedulers.shutdown();
But then all the Schedulers stop and it is not possible to resume the ticking later. How can I stop and resume the emitting "gracefully"?
Here's one possible solution:
class TickHandler {
private AtomicLong lastTick = new AtomicLong(0L);
private Subscription subscription;
void resume() {
System.out.println("resumed");
subscription = Observable.interval(5, TimeUnit.SECONDS, Schedulers.io())
.map(tick -> lastTick.getAndIncrement())
.subscribe(tick -> System.out.println("tick = " + tick));
}
void stop() {
if (subscription != null && !subscription.isUnsubscribed()) {
System.out.println("stopped");
subscription.unsubscribe();
}
}
}
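For completeness, here is a minimal usage sketch of the handler above (the main method and the sleeps are only there to drive the demo; because resume() maps ticks through lastTick.getAndIncrement(), the count continues across stop/resume):
public static void main(String[] args) throws InterruptedException {
    TickHandler handler = new TickHandler();
    handler.resume();         // starts ticking every 5 seconds
    Thread.sleep(20_000);     // let a few ticks through
    handler.stop();           // unsubscribes, ticking pauses
    Thread.sleep(10_000);     // nothing is emitted in the meantime
    handler.resume();         // a new subscription continues counting from lastTick
}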
Some time ago, I was also looking for RX "timer" solutions of this kind, but none of them met my expectations. So here you can find my own solution:
AtomicLong elapsedTime = new AtomicLong();
AtomicBoolean resumed = new AtomicBoolean();
AtomicBoolean stopped = new AtomicBoolean();
public Flowable<Long> startTimer() { // Creates and starts the timer
resumed.set(true);
stopped.set(false);
return Flowable.interval(1, TimeUnit.SECONDS)
.takeWhile(tick -> !stopped.get())
.filter(tick -> resumed.get())
.map(tick -> elapsedTime.addAndGet(1000));
}
public void pauseTimer() {
resumed.set(false);
}
public void resumeTimer() {
resumed.set(true);
}
public void stopTimer() {
stopped.set(true);
}
public void addToTimer(int seconds) {
elapsedTime.addAndGet(seconds * 1000);
}
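A minimal usage sketch for this timer (RxJava 2; the Disposable and the print statement are only for illustration):
Disposable d = startTimer()
        .subscribe(elapsedMs -> System.out.println("elapsed: " + elapsedMs + " ms"));

pauseTimer();   // the interval keeps firing, but ticks are filtered out
resumeTimer();  // ticks flow again and elapsedTime continues where it left off
stopTimer();    // takeWhile completes the Flowable on the next tick
d.dispose();    // optional: also release the underlying interval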
val switch = new java.util.concurrent.atomic.AtomicBoolean(true)
val tick = new java.util.concurrent.atomic.AtomicLong(0L)
val suspendableObservable =
Observable.
interval(5 seconds).
takeWhile(_ => switch.get()).
repeat.
map(_ => tick.incrementAndGet())
You can set switch to false to suspend the ticking and true to resume it.
Sorry this is in RxJS instead of RxJava, but the concept will be the same. I adapted this from learn-rxjs.io and here it is on codepen.
The idea is that you start out with two streams of click events, startClick$ and stopClick$. Each click occurring on the stopClick$ stream gets mapped to an empty observable, and clicks on startClick$ each get mapped to the interval$ stream. The two resulting streams get merge-d together into one observable-of-observables. In other words, a new observable of one of the two types will be emitted from merge each time there's a click. The resulting observable goes through switchMap, which starts listening to this new observable and stops listening to whatever it was listening to before. switchMap will also start merging the values from this new observable onto its existing stream.
After the switch, scan only ever sees the "increment" value emitted by interval$, and it doesn't see any values when "stop" has been clicked.
And until the first click occurs, startWith will start emitting values from interval$, just to get things going:
const start = 0;
const increment = 1;
const delay = 1000;
const stopButton = document.getElementById('stop');
const startButton = document.getElementById('start');
const startClick$ = Rx.Observable.fromEvent(startButton, 'click');
const stopClick$ = Rx.Observable.fromEvent(stopButton, 'click');
const interval$ = Rx.Observable.interval(delay).mapTo(increment);
const setCounter = newValue => document.getElementById("counter").innerHTML = newValue;
setCounter(start);
const timer$ = Rx.Observable
// a "stop" click will emit an empty observable,
// and a "start" click will emit the interval$ observable.
// These two streams are merged into one observable.
.merge(stopClick$.mapTo(Rx.Observable.empty()),
startClick$.mapTo(interval$))
// until the first click occurs, merge will emit nothing, so
// use the interval$ to start the counter in the meantime
.startWith(interval$)
// whenever a new observable starts, stop listening to the previous
// one and start emitting values from the new one
.switchMap(val => val)
// add the increment emitted by the interval$ stream to the accumulator
.scan((acc, curr) => curr + acc, start)
// start the observable and send results to the DIV
.subscribe((x) => setCounter(x));
And here's the HTML
<html>
<body>
<div id="counter"></div>
<button id="start">
Start
</button>
<button id="stop">
Stop
</button>
</body>
</html>
Here is another way to do this, I think.
When you check the source code, you will find that interval() uses the class OnSubscribeTimerPeriodically. The key code is below.
@Override
public void call(final Subscriber<? super Long> child) {
final Worker worker = scheduler.createWorker();
child.add(worker);
worker.schedulePeriodically(new Action0() {
long counter;
@Override
public void call() {
try {
child.onNext(counter++);
} catch (Throwable e) {
try {
worker.unsubscribe();
} finally {
Exceptions.throwOrReport(e, child);
}
}
}
}, initialDelay, period, unit);
}
So, as you can see, if you want to cancel the loop, you can throw a new exception in onNext(). Example code below.
Observable.interval(1000, TimeUnit.MILLISECONDS)
.subscribe(new Action1<Long>() {
@Override
public void call(Long aLong) {
Log.i("abc", "onNext");
if (aLong == 5) throw new NullPointerException();
}
}, new Action1<Throwable>() {
@Override
public void call(Throwable throwable) {
Log.i("abc", "onError");
}
}, new Action0() {
@Override
public void call() {
Log.i("abc", "onCompleted");
}
});
Then you will see this:
08-08 11:10:46.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:47.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:48.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:49.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:50.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:51.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:51.018 28146-28181/net.bingyan.test I/abc: onError
You can use takeWhile to loop while the condition is true:
Observable.interval(1, TimeUnit.SECONDS)
.takeWhile {
Log.i(TAG, " time " + it)
it != 30L
}
.subscribe(object : Observer<Long> {
override fun onComplete() {
Log.i(TAG, "onComplete " + format.format(System.currentTimeMillis()))
}
override fun onSubscribe(d: Disposable) {
Log.i(TAG, "onSubscribe " + format.format(System.currentTimeMillis()))
}
override fun onNext(t: Long) {
Log.i(TAG, "onNext " + format.format(System.currentTimeMillis()))
}
override fun onError(e: Throwable) {
Log.i(TAG, "onError")
e.printStackTrace()
}
});
@AndroidEx, that's a wonderful answer. I did it a bit differently:
private fun disposeTask() {
    if (disposeable != null && !disposeable.isDisposed)
        disposeable.dispose()
}

private fun runTask() {
    disposeable = Observable.interval(0, 30, TimeUnit.SECONDS)
        .flatMap {
            apiCall.runTaskFromServer()
                .map {
                    when (it) {
                        is ResponseClass.Success -> {
                            keepRunningsaidTasks()
                        }
                        is ResponseClass.Failure -> {
                            disposeTask() // this will stop the task in case of a network failure
                        }
                    }
                }
        }
        .subscribe()
}
I am new to vertx and async programming.
I have 2 verticles communicating via an event bus as follows:
//API Verticle
public class SearchAPIVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private Integer defaultPort;
private void sendSearchRequest(RoutingContext routingContext) {
final JsonObject requestMessage = routingContext.getBodyAsJson();
final EventBus eventBus = vertx.eventBus();
eventBus.request(GET_USEARCH_DOCS, requestMessage, reply -> {
if (reply.succeeded()) {
Logger.info("Search Result = " + reply.result().body());
routingContext.response()
.putHeader("content-type", "application/json")
.setStatusCode(200)
.end((String) reply.result().body());
} else {
Logger.info("Document Search Request cannot be processed");
routingContext.response()
.setStatusCode(500)
.end();
}
});
}
@Override
public void start() throws Exception {
Logger.info("Starting the Gateway service (Event Sender) verticle");
// Create a Router
Router router = Router.router(vertx);
//Added bodyhandler so we can process json messages via the event bus
router.route().handler(BodyHandler.create());
// Mount the handler for incoming requests
// Find documents
router.post("/api/search/docs/*").handler(this::sendSearchRequest);
// Create an HTTP Server using default options
HttpServer server = vertx.createHttpServer();
// Handle every request using the router
server.requestHandler(router)
//start listening on port 8083
.listen(config().getInteger("http.port", 8083)).onSuccess(msg -> {
Logger.info("*************** Search Gateway Server started on "
+ server.actualPort() + " *************");
});
}
@Override
public void stop(){
//house keeping
}
}
//Below is the target verticle that should make the multiple web client calls and merge the responses
@Component
public class SolrCloudVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private SearchRepository searchRepositoryService;
@Override
public void start() throws Exception {
Logger.info("Starting the Solr Cloud Search Service (Event Consumer) verticle");
super.start();
ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file")
.setConfig(new JsonObject().put("path", "conf/config.json"));
ConfigRetrieverOptions configRetrieverOptions = new ConfigRetrieverOptions()
.addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, configRetrieverOptions);
configRetriever.getConfig(ar -> {
if (ar.succeeded()) {
JsonObject configJson = ar.result();
EventBus eventBus = vertx.eventBus();
eventBus.<JsonObject>consumer(GET_USEARCH_DOCS).handler(getDocumentService(searchRepositoryService, configJson));
Logger.info("Completed search service event processing");
} else {
Logger.error("Failed to retrieve the config");
}
});
}
private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService, JsonObject configJson) {
return requestMessage -> vertx.<String>executeBlocking(future -> {
try {
//I need to incorporate the logic here that adds futures to list and composes the compositefuture
/*
//Below is my logic to populate the future list
WebClient client = WebClient.create(vertx);
List<Future> futureList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
Future<String> future1 = client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
.expect(ResponsePredicate.SC_OK)
.sendJsonObject(requestMessage.body())
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
futureList.add(future1);
}
//Below is the CompositeFuture logic, but the logic and construct do not make sense to me. What goes as the first and second argument of the executeBlocking method?
/*CompositeFuture.join(futureList)
.onSuccess(result -> {
result.list().forEach( x -> {
if(x != null){
requestMessage.reply(result.result());
}
}
);
})
.onFailure(error -> {
System.out.println("We should not fail");
})
*/
future.complete("DAO returns a Json String");
} catch (Exception e) {
future.fail(e);
}
}, result -> {
if (result.succeeded()) {
requestMessage.reply(result.result());
} else {
requestMessage.reply(result.cause()
.toString());
}
});
}
}
I was able to use the org.springframework.web.reactive.function.client.WebClient calls to compose my search result from multiple web client calls, as opposed to using Future<io.vertx.ext.web.client.WebClient> with CompositeFuture.
I was trying to avoid mixing Spring Boot and Vert.x, but unfortunately the Vert.x CompositeFuture did not work here:
//This method supplies the parameter for the future.complete(..) line in getDocumentService(SearchRepository,JsonObject)
private List<JsonObject> findByQueryParamsAndDataSources(SearchRepository searchRepositoryService,
JsonObject configJson,
JsonObject requestMessage)
throws SolrServerException, IOException {
List<JsonObject> searchResultList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
searchResultList.add(new JsonObject(doSearchPerCollection(collection.toString(), requestMessage.toString())));
}
return aggregateMultiCollectionSearchResults(searchResultList);
}
public String doSearchPerCollection(String collection, String message) {
org.springframework.web.reactive.function.client.WebClient client =
org.springframework.web.reactive.function.client.WebClient.create();
return client.post()
.uri("http://127.0.0.1:8983/solr/" + collection + "/query")
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON)
.body(BodyInserters.fromValue(message.toString()))
.retrieve()
.bodyToMono(String.class)
.block();
}
private List<JsonObject> aggregateMultiCollectionSearchResults(List<JsonObject> searchList){
//TODO: Search result aggregation
return searchList;
}
My use case is that the second verticle should make multiple Vert.x web client calls and combine the responses.
If an API call fails, I want to log the error and still continue processing and merging the responses from the other calls.
Please, any help on how my code above could be adapted to handle this use case?
I am looking at the Vert.x CompositeFuture, but no headway or useful example found yet!
What you are looking for can be done with Future coordination and a little bit of additional handling:
CompositeFuture.join(future1, future2, future3).onComplete(ar -> {
if (ar.succeeded()) {
// All succeeded
} else {
// All completed and at least one failed
}
});
The join composition waits until all futures are completed, either with a success or a failure.
CompositeFuture.join
takes several futures arguments (up to 6) and returns a future that is succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of them is failed
Using join you will wait for all Futures to complete; the issue is that if one of them fails you will not be able to obtain the responses from the others, as the CompositeFuture will be failed. To avoid this you should add Future<T> recover(Function<Throwable, Future<T>> mapper) on each of your Futures, in which you should log the error and pass an empty response so that the future does not fail.
Here is a short example:
Future<String> response1 = client.post(8887, "localhost", "work").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
Future<String> response2 = client.post(8887, "localhost", "error").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
CompositeFuture.join(response2, response1)
.onSuccess(result -> {
result.list().forEach(x -> {
if(x != null) {
System.out.println(x);
}
});
})
.onFailure(error -> {
System.out.println("We should not fail");
});
Edit 1:
The limit for CompositeFuture.join(Future...) is 6 futures; in case you need more, you can use CompositeFuture.join(Arrays.asList(future1, future2, future3)), where you can pass an unlimited number of futures.
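For example, a minimal sketch of joining a list and then reading the individual results through the CompositeFuture accessors (the response1/response2 futures are assumed to be built as in the example above):
List<Future> futureList = new ArrayList<>();
futureList.add(response1);
futureList.add(response2);

CompositeFuture joined = CompositeFuture.join(futureList);
joined.onComplete(ar -> {
    // succeeded(i) / resultAt(i) let you inspect each future even if some of them failed
    for (int i = 0; i < joined.size(); i++) {
        if (joined.succeeded(i) && joined.resultAt(i) != null) {
            System.out.println("response " + i + ": " + joined.resultAt(i));
        }
    }
});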
Caller of the method:
for (String name : controllerToPartitionModels.keySet())
{
List<PartitionModel> partitionsList = controllerToPartitionModels.get(name);
refreshPartition(partitionsList,false);
}
Method
private void refreshPartition(List<PartitionModel> partitionModels, boolean isSyncAll) {
ITModule.getITService()
.refreshPartitionStatus(new ArrayList<>(partitionModels), isSyncAll)
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.io())
.subscribe(new Action() {
@Override
public void run() throws Exception {
Logger.get().d(ATTActionManager.this, "Refreshing request sent successfully for list of size : " + partitionModels.size());
}
}, (@NonNull Throwable throwable) -> {
Logger.get().d(ATTActionManager.this, "Error on Refresh request");
});
}
Problem
If there are 2 requests that have to be sent, I sometimes see only one request being sent. Meaning, even though the for loop executes twice for the 2 (HTTP) requests, I see only one request being sent to the server.
What is it that I am doing wrong here?
RxJava version in use: 2.2.19
You can merge the above 2 methods to solve your problem by using flatMapIterable.
Merged Solution:
private void refreshPartition(Map<String, ?> controllerToPartitionModels) {
Observable.just(controllerToPartitionModels)
.map(models -> models.keySet())
.flatMapIterable((Function<Set<String>, Iterable<String>>) name -> name)
.map(name -> {
boolean isSyncAll = false; // You can customise as per requirement
return new Pair<List<PartitionModel>, Boolean>(controllerToPartitionModels.get(name), isSyncAll);
})
.flatMap((Function<Pair<List<PartitionModel>, Boolean>, ObservableSource<?>>) pair -> {
List<PartitionModel> partitionModels = pair.first;
boolean isSyncAll = pair.second;
return ITModule.getITService()
.refreshPartitionStatus(new ArrayList<>(partitionModels), isSyncAll);
}
)
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.io())
.subscribe(new Action() {
@Override
public void run() throws Exception {
Logger.get().d(ATTActionManager.this, "Refreshing request sent successfully for list of size : " + partitionModels.size());
}
}, (@NonNull Throwable throwable) -> {
Logger.get().d(ATTActionManager.this, "Error on Refresh request");
});
}
*Kindly replace ? with the valid object type.
I created a cache with the following parameters:
cacheTempFiles = CacheBuilder.newBuilder().maximumSize(250).expireAfterWrite(15, TimeUnit.SECONDS).removalListener(new RemovalListener<String, Path>()
{
@Override
public void onRemoval(RemovalNotification<String, Path> notification)
{
deleteTemporaryFile(notification.getValue());
}
}).build();
Moreover, I'm calling cacheTempFiles.cleanUp() every 2 minutes. However, it seems that onRemoval is never called.
What is missing in my implementation?
It definitely should work, see example below:
@Test
public void shouldCallRemovalListener() {
AtomicInteger counter = new AtomicInteger();
MutableClock clock = MutableClock.epochUTC();
Ticker ticker = new Ticker() {
@Override
public long read() {
return TimeUnit.MILLISECONDS.toNanos(clock.millis());
}
};
Path tmpPath = Path.of("/tmp");
Cache<String, Path> cacheTempFiles = CacheBuilder.newBuilder()
.ticker(ticker)
.maximumSize(250)
.expireAfterWrite(15, TimeUnit.SECONDS)
.removalListener(
(RemovalNotification<String, Path> notification) ->
System.out.println(String.format(
"Delete '%s -> %s' (%d times)",
notification.getKey(), notification.getValue(), counter.incrementAndGet())))
.build();
cacheTempFiles.put("tmp", tmpPath);
assertThat(cacheTempFiles.asMap()).containsOnly(Assertions.entry("tmp", tmpPath));
assertThat(counter).hasValue(0);
clock.add(Duration.ofSeconds(20));
cacheTempFiles.cleanUp();
assertThat(cacheTempFiles.asMap()).isEmpty();
assertThat(counter).hasValue(1);
}
Passes and outputs Delete 'tmp -> /tmp' (1 times).
I need to submit a task in an async framework I'm working on, but I need to catch exceptions and retry the same task multiple times before "aborting".
The code I'm working with is:
int retries = 0;
public CompletableFuture<Result> executeActionAsync() {
// Execute the action async and get the future
CompletableFuture<Result> f = executeMycustomActionHere();
// If the future completes with exception:
f.exceptionally(ex -> {
retries++; // Increment the retry count
if (retries < MAX_RETRIES)
return executeActionAsync(); // <--- Submit one more time
// Abort with a null value
return null;
});
// Return the future
return f;
}
This currently doesn't compile because the return type of the lambda is wrong: it expects a Result, but the executeActionAsync returns a CompletableFuture<Result>.
How can I implement this fully async retry logic?
Chaining subsequent retries can be straightforward:
public CompletableFuture<Result> executeActionAsync() {
CompletableFuture<Result> f=executeMycustomActionHere();
for(int i=0; i<MAX_RETRIES; i++) {
f=f.exceptionally(t -> executeMycustomActionHere().join());
}
return f;
}
Read about the drawbacks below
This simply chains as many retries as intended, as these subsequent stages won’t do anything in the non-exceptional case.
One drawback is that if the first attempt fails immediately, so that f is already completed exceptionally when the first exceptionally handler is chained, the action will be invoked by the calling thread, removing the asynchronous nature of the request entirely. And generally, join() may block a thread (the default executor will start a new compensation thread then, but still, it's discouraged). Unfortunately, there is neither an exceptionallyAsync nor an exceptionallyCompose method.
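(As a side note, Java 12 and newer do add an exceptionallyCompose method to CompletableFuture; assuming that newer API is available, the retry loop needs neither join() nor the compose trick:)
public CompletableFuture<Result> executeActionAsync() {
    CompletableFuture<Result> f = executeMycustomActionHere();
    for (int i = 0; i < MAX_RETRIES; i++) {
        // re-invokes the action only when the previous stage completed exceptionally
        f = f.exceptionallyCompose(t -> executeMycustomActionHere());
    }
    return f;
}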
A solution not invoking join() would be
public CompletableFuture<Result> executeActionAsync() {
CompletableFuture<Result> f=executeMycustomActionHere();
for(int i=0; i<MAX_RETRIES; i++) {
f=f.thenApply(CompletableFuture::completedFuture)
.exceptionally(t -> executeMycustomActionHere())
.thenCompose(Function.identity());
}
return f;
}
demonstrating how involved combining “compose” and an “exceptionally” handler is.
Further, only the last exception will be reported if all retries failed. A better solution should report the first exception, with subsequent exceptions of the retries added as suppressed exceptions. Such a solution can be built by chaining a recursive call, as hinted by Gili's answer; however, in order to use this idea for exception handling, we have to use the steps to combine "compose" and "exceptionally" shown above:
public CompletableFuture<Result> executeActionAsync() {
return executeMycustomActionHere()
.thenApply(CompletableFuture::completedFuture)
.exceptionally(t -> retry(t, 0))
.thenCompose(Function.identity());
}
private CompletableFuture<Result> retry(Throwable first, int retry) {
if(retry >= MAX_RETRIES) return CompletableFuture.failedFuture(first);
return executeMycustomActionHere()
.thenApply(CompletableFuture::completedFuture)
.exceptionally(t -> { first.addSuppressed(t); return retry(first, retry+1); })
.thenCompose(Function.identity());
}
CompletableFuture.failedFuture is a Java 9 method, but it would be trivial to add a Java 8 compatible backport to your code if needed:
public static <T> CompletableFuture<T> failedFuture(Throwable t) {
final CompletableFuture<T> cf = new CompletableFuture<>();
cf.completeExceptionally(t);
return cf;
}
Instead of implementing your own retry logic, I recommend using a proven library like failsafe, which has built-in support for futures (and seems more popular than guava-retrying). For your example, it would look something like:
private static RetryPolicy retryPolicy = new RetryPolicy()
.withMaxRetries(MAX_RETRIES);
public CompletableFuture<Result> executeActionAsync() {
return Failsafe.with(retryPolicy)
.with(executor)
.withFallback(null)
.future(this::executeMycustomActionHere);
}
Probably you should avoid .withFallback(null) and just let the returned future's .get() method throw the resulting exception, so the caller of your method can handle it specifically, but that's a design decision you'll have to make.
Other things to think about include whether you should retry immediately or wait some period of time between attempts, any sort of exponential backoff (useful when you're calling a web service that might be down), and whether there are specific exceptions that aren't worth retrying (e.g. if the parameters to the method are invalid).
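For illustration, a hedged sketch of such a policy, assuming the same (pre-3.x) Failsafe RetryPolicy API used above; exact method names may differ between Failsafe versions:
private static RetryPolicy retryPolicy = new RetryPolicy()
    .withMaxRetries(MAX_RETRIES)
    .withBackoff(1, 30, TimeUnit.SECONDS)        // 1s, 2s, 4s ... capped at 30s between attempts
    .retryOn(IOException.class)                  // transient failures are worth retrying
    .abortOn(IllegalArgumentException.class);    // invalid parameters will never succeed, so give up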
I think I was successful. Here's an example class I created and the test code:
RetriableTask.java
public class RetriableTask
{
protected static final int MAX_RETRIES = 10;
protected int retries = 0;
protected int n = 0;
protected CompletableFuture<Integer> future = new CompletableFuture<Integer>();
public RetriableTask(int number) {
n = number;
}
public CompletableFuture<Integer> executeAsync() {
// Create a failure within variable timeout
Duration timeoutInMilliseconds = Duration.ofMillis(1*(int)Math.pow(2, retries));
CompletableFuture<Integer> timeoutFuture = Utils.failAfter(timeoutInMilliseconds);
// Create a dummy future and complete only if (n > 5 && retries > 5) so we can test for both completion and timeouts.
// In real application this should be a real future
final CompletableFuture<Integer> taskFuture = new CompletableFuture<>();
if (n > 5 && retries > 5)
taskFuture.complete(retries * n);
// Attach the failure future to the task future, and perform a check on completion
taskFuture.applyToEither(timeoutFuture, Function.identity())
.whenCompleteAsync((result, exception) -> {
if (exception == null) {
future.complete(result);
} else {
retries++;
if (retries >= MAX_RETRIES) {
future.completeExceptionally(exception);
} else {
executeAsync();
}
}
});
// Return the future
return future;
}
}
Usage
int size = 10;
System.out.println("generating...");
List<RetriableTask> tasks = new ArrayList<>();
for (int i = 0; i < size; i++) {
tasks.add(new RetriableTask(i));
}
System.out.println("issuing...");
List<CompletableFuture<Integer>> futures = new ArrayList<>();
for (int i = 0; i < size; i++) {
futures.add(tasks.get(i).executeAsync());
}
System.out.println("Waiting...");
for (int i = 0; i < size; i++) {
try {
CompletableFuture<Integer> future = futures.get(i);
int result = future.get();
System.out.println(i + " result is " + result);
} catch (Exception ex) {
System.out.println(i + " I got exception!");
}
}
System.out.println("Done waiting...");
Output
generating...
issuing...
Waiting...
0 I got exception!
1 I got exception!
2 I got exception!
3 I got exception!
4 I got exception!
5 I got exception!
6 result is 36
7 result is 42
8 result is 48
9 result is 54
Done waiting...
Main idea and some glue code (failAfter function) come from here.
Any other suggestions or improvements are welcome.
util class:
public class RetryUtil {
public static <R> CompletableFuture<R> retry(Supplier<CompletableFuture<R>> supplier, int maxRetries) {
CompletableFuture<R> f = supplier.get();
for(int i=0; i<maxRetries; i++) {
f=f.thenApply(CompletableFuture::completedFuture)
.exceptionally(t -> {
System.out.println("retry for: "+t.getMessage());
return supplier.get();
})
.thenCompose(Function.identity());
}
return f;
}
}
usage:
public CompletableFuture<String> lucky(){
return CompletableFuture.supplyAsync(()->{
double luckNum = Math.random();
double luckEnough = 0.6;
if(luckNum < luckEnough){
throw new RuntimeException("not luck enough: " + luckNum);
}
return "I'm lucky: "+luckNum;
});
}
@Test
public void testRetry(){
CompletableFuture<String> retry = RetryUtil.retry(this::lucky, 10);
System.out.println("async check");
String join = retry.join();
System.out.println("lucky? "+join);
}
output
async check
retry for: java.lang.RuntimeException: not luck enough: 0.412296354211683
retry for: java.lang.RuntimeException: not luck enough: 0.4099777199676573
lucky? I'm lucky: 0.8059089479049389
I recently solved a similar problem using the guava-retrying library.
Callable<Result> callable = new Callable<Result>() {
public Result call() throws Exception {
return executeMycustomActionHere();
}
};
Retryer<Result> retryer = RetryerBuilder.<Result>newBuilder()
.retryIfResult(Predicates.<Result>isNull())
.retryIfExceptionOfType(IOException.class)
.retryIfRuntimeException()
.withStopStrategy(StopStrategies.stopAfterAttempt(MAX_RETRIES))
.build();
CompletableFuture.supplyAsync( () -> {
    try {
        return retryer.call(callable);
    } catch (RetryException e) {
        e.printStackTrace();
    } catch (ExecutionException e) {
        e.printStackTrace();
    }
    return null;
});
Here is an approach that will work for any CompletionStage subclass and does not return a dummy CompletableFuture that does nothing more than wait to get updated by other futures.
/**
* Sends a request that may run as many times as necessary.
*
* @param request a supplier that initiates an HTTP request
* @param executor the Executor used to run the request
* @return the server response
*/
public CompletionStage<Response> asyncRequest(Supplier<CompletionStage<Response>> request, Executor executor)
{
return retry(request, executor, 0);
}
/**
* Sends a request that may run as many times as necessary.
*
* @param request a supplier that initiates an HTTP request
* @param executor the Executor used to run the request
* @param tries the number of times the operation has been retried
* @return the server response
*/
private CompletionStage<Response> retry(Supplier<CompletionStage<Response>> request, Executor executor, int tries)
{
if (tries >= MAX_RETRIES)
throw new CompletionException(new IOException("Request failed after " + MAX_RETRIES + " tries"));
return request.get().thenComposeAsync(response ->
{
if (response.getStatusInfo().getFamily() != Response.Status.Family.SUCCESSFUL)
return retry(request, executor, tries + 1);
return CompletableFuture.completedFuture(response);
}, executor);
}
Maybe it's late, but I hope someone might find this useful. I recently solved this problem for retrying a REST API call on failure. In my case, I have to retry on a 500 HTTP status code; below is my REST client code (we are using WSClient from the Play framework), and you can change it to whatever REST client you need.
int MAX_RETRY = 3;
CompletableFuture<WSResponse> future = new CompletableFuture<>();
private CompletionStage<WSResponse> getWS(Object request,String url, int retry, CompletableFuture future) throws JsonProcessingException {
ws.url(url)
.post(Json.parse(mapper.writeValueAsString(request)))
.whenCompleteAsync((wsResponse, exception) -> {
if(wsResponse.getStatus() == 500 && retry < MAX_RETRY) {
try {
getWS(request, url, retry + 1, future);
} catch (IOException e) {
throw new RuntimeException(e);
}
}else {
future.complete(wsResponse);
}
});
return future;
}
This code will return immediately if the status code is 200 or anything other than 500, whereas if the HTTP status is 500 it will retry up to 3 times.
Inspired by theazureshadow's answer. His or her answer was great but doesn't work with the new version of Failsafe. The code below works with
<dependency>
<groupId>dev.failsafe</groupId>
<artifactId>failsafe</artifactId>
<version>3.3.0</version>
</dependency>
solution:
RetryPolicy<Object> retryPolicy = RetryPolicy.builder()
.withMaxRetries(MAX_RETRY)
.withBackoff(INITIAL_DELAY, MAX_DELAY, ChronoUnit.SECONDS)
.build();
Fallback<Object> fallback = Fallback.of((AuditEvent) null);
public CompletableFuture<Object> executeAsync(Supplier<Object> asyncTask) {
return Failsafe.with(fallback)
.compose(retryPolicy)
.with(executorService)
.onFailure(e -> LOG.error(e.getException().getMessage()))
.getAsync(() -> asyncTask.get());
}
We needed to retry a task based on an error condition.
public static <T> CompletableFuture<T> retryOnCondition(Supplier<CompletableFuture<T>> supplier,
Predicate<Throwable> retryPredicate, int maxAttempts) {
if (maxAttempts <= 0) {
throw new IllegalArgumentException("maxAttempts can't be <= 0");
}
return retryOnCondition(supplier, retryPredicate, null, maxAttempts);
}
private static <T> CompletableFuture<T> retryOnCondition(
Supplier<CompletableFuture<T>> supplier, Predicate<Throwable> retryPredicate,
Throwable lastError, int attemptsLeft) {
if (attemptsLeft == 0) {
return CompletableFuture.failedFuture(lastError);
}
return supplier.get()
.thenApply(CompletableFuture::completedFuture)
.exceptionally(error -> {
boolean doRetry = retryPredicate.test(error);
int attempts = doRetry ? attemptsLeft - 1 : 0;
return retryOnCondition(supplier, retryPredicate, error, attempts);
})
.thenCompose(Function.identity());
}
Usage:
public static void main(String[] args) {
retryOnCondition(() -> myTask(), e -> {
//log exception
return e instanceof MyException;
}, 3).join();
}
I would suggest using resilience4j for this use case. It's very handy!!
Introduction: resilience4j-retry and its Javadoc: Retry
They have a method to decorate a CompletionStage directly, as below:
default <T> java.util.concurrent.CompletionStage<T> executeCompletionStage(java.util.concurrent.ScheduledExecutorService scheduler,
java.util.function.Supplier<java.util.concurrent.CompletionStage<T>> supplier)
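A minimal sketch of wiring that up (names such as executeMycustomActionHere follow the question; the ScheduledExecutorService is what resilience4j uses to schedule the delayed retries):
RetryConfig config = RetryConfig.custom()
        .maxAttempts(MAX_RETRIES)                // total attempts, including the first call
        .waitDuration(Duration.ofMillis(500))    // delay between attempts
        .build();
Retry retry = Retry.of("executeAction", config);
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

CompletionStage<Result> result =
        retry.executeCompletionStage(scheduler, () -> executeMycustomActionHere());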
I am having a lot of trouble understanding the zip operator in RxJava for my android project.
Problem
I need to be able to send a network request to upload a video
Then I need to send a network request to upload a picture to go with it
Finally, I need to add a description and use the responses from the previous two requests to upload the location URLs of the video and picture, along with the description, to my server.
I assumed that the zip operator would be perfect for this task as I understood we could take the response of two observables (video and picture requests) and use them for my final task.
But I can't seem to get this to occur the way I envision it.
I am looking for someone to answer how this can be done conceptually, with a bit of pseudo code.
Thank you
The zip operator strictly pairs emitted items from observables. It waits for both (or more) items to arrive, then merges them. So yes, this would be suitable for your needs.
I would use Func2 to chain the results from the first two observables.
Notice this approach would be simpler if you use Retrofit, since its API interface may return an observable. Otherwise you would need to create your own observable.
// assuming each observable returns response in the form of String
Observable<String> movOb = Observable.create(...);
// if you use Retrofit
Observable<String> picOb = RetrofitApiManager.getService().uploadPic(...);
Observable.zip(movOb, picOb, new Func2<String, String, MyResult>() {
@Override
public MyResult call(String movieUploadResponse, String picUploadResponse) {
// analyze both responses, upload them to another server
// and return this method with a MyResult type
return myResult;
}
}
)
// continue chaining this observable with subscriber
// or use it for something else
A small example:
val observableOne = Observable.just("Hello", "World")
val observableTwo = Observable.just("Bye", "Friends")
val zipper = BiFunction<String, String, String> { first, second -> "$first - $second" }
Observable.zip(observableOne, observableTwo, zipper)
.subscribe { println(it) }
This will print:
Hello - Bye
World - Friends
In BiFunction<String, String, String> the first String is the type of the first observable, the second String is the type of the second observable, and the third String represents the return type of your zipper function.
I made a small example that calls two real endpoints using zip in this blog post
Here I have an example that I did using zip asynchronously, just in case you're curious.
/**
* Since every observable in the zip is created to subscribe on a different thread, it means all of them will run in parallel.
* By default Rx is not async, unless you explicitly use subscribeOn.
*/
@Test
public void testAsyncZip() {
scheduler = Schedulers.newThread();
scheduler1 = Schedulers.newThread();
scheduler2 = Schedulers.newThread();
long start = System.currentTimeMillis();
Observable.zip(obAsyncString(), obAsyncString1(), obAsyncString2(), (s, s2, s3) -> s.concat(s2)
.concat(s3))
.subscribe(result -> showResult("Async in:", start, result));
}
/**
* In this example the three observables will be emitted sequentially and the three items will be passed to the pipeline
*/
@Test
public void testZip() {
long start = System.currentTimeMillis();
Observable.zip(obString(), obString1(), obString2(), (s, s2, s3) -> s.concat(s2)
.concat(s3))
.subscribe(result -> showResult("Sync in:", start, result));
}
public void showResult(String transactionType, long start, String result) {
System.out.println(result + " " +
transactionType + String.valueOf(System.currentTimeMillis() - start));
}
public Observable<String> obString() {
return Observable.just("")
.doOnNext(val -> {
System.out.println("Thread " + Thread.currentThread()
.getName());
})
.map(val -> "Hello");
}
public Observable<String> obString1() {
return Observable.just("")
.doOnNext(val -> {
System.out.println("Thread " + Thread.currentThread()
.getName());
})
.map(val -> " World");
}
public Observable<String> obString2() {
return Observable.just("")
.doOnNext(val -> {
System.out.println("Thread " + Thread.currentThread()
.getName());
})
.map(val -> "!");
}
public Observable<String> obAsyncString() {
return Observable.just("")
.observeOn(scheduler)
.doOnNext(val -> {
System.out.println("Thread " + Thread.currentThread()
.getName());
})
.map(val -> "Hello");
}
public Observable<String> obAsyncString1() {
return Observable.just("")
.observeOn(scheduler1)
.doOnNext(val -> {
System.out.println("Thread " + Thread.currentThread()
.getName());
})
.map(val -> " World");
}
public Observable<String> obAsyncString2() {
return Observable.just("")
.observeOn(scheduler2)
.doOnNext(val -> {
System.out.println("Thread " + Thread.currentThread()
.getName());
})
.map(val -> "!");
}
You can see more examples here https://github.com/politrons/reactive
The zip operator allows you to compose a result from the results of two different observables.
You'll have to provide a lambda that will create a result from the data emitted by each observable.
Observable<MovieResponse> movies = ...
Observable<PictureResponse> picture = ...
Observable<Response> response = movies.zipWith(picture, (movie, pic) -> {
return new Response("description", movie.getName(), pic.getUrl());
});
I had been searching for a simple answer on how to use the zip operator, and what to do with the Observables I create to pass to it. I was wondering whether I should call subscribe() for every observable or not. None of these answers were simple to find, and I had to figure it out by myself, so here is a simple example of using the zip operator on 2 Observables:
@Test
public void zipOperator() throws Exception {
List<Integer> indexes = Arrays.asList(0, 1, 2, 3, 4);
List<String> letters = Arrays.asList("a", "b", "c", "d", "e");
Observable<Integer> indexesObservable = Observable.fromIterable(indexes);
Observable<String> lettersObservable = Observable.fromIterable(letters);
Observable.zip(indexesObservable, lettersObservable, mergeEmittedItems())
.subscribe(printMergedItems());
}
@NonNull
private BiFunction<Integer, String, String> mergeEmittedItems() {
return new BiFunction<Integer, String, String>() {
@Override
public String apply(Integer index, String letter) throws Exception {
return "[" + index + "] " + letter;
}
};
}
@NonNull
private Consumer<String> printMergedItems() {
return new Consumer<String>() {
@Override
public void accept(String s) throws Exception {
System.out.println(s);
}
};
}
The printed result is:
[0] a
[1] b
[2] c
[3] d
[4] e
The final answers to the questions that were in my head are as follows:
The Observables passed to the zip() method just need to be created; they do not need to have any subscribers of their own, only creating them is enough. If you want any observable to run on a scheduler, you can specify this for that Observable. I also tried the zip() operator on Observables that should wait for their results, and the Consumer of the zip() was triggered only when both results were ready (which is the expected behavior).
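For example, a small sketch (RxJava 2; uploadVideo() and uploadPicture() are hypothetical methods returning Observable<String>): giving each source its own scheduler makes the two uploads run in parallel, while zip still waits for one item from each before combining them.
Observable<String> video = uploadVideo().subscribeOn(Schedulers.io());
Observable<String> picture = uploadPicture().subscribeOn(Schedulers.io());

Observable.zip(video, picture, (videoUrl, pictureUrl) -> videoUrl + " | " + pictureUrl)
        .subscribe(System.out::println);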
This is my implementation using Single.zip and RxJava 2.
I tried to make it as easy to understand as possible.
//
// API Client Interface
//
@GET(ServicesConstants.API_PREFIX + "questions/{id}/")
Single<Response<ResponseGeneric<List<ResponseQuestion>>>> getBaseQuestions(@Path("id") int personId);
@GET(ServicesConstants.API_PREFIX + "physician/{id}/")
Single<Response<ResponseGeneric<List<ResponsePhysician>>>> getPhysicianInfo(@Path("id") int personId);
//
// API middle layer - NOTE: I had feedback that the Single.create is not needed (but I haven't yet spent the time to improve it)
//
public Single<List<ResponsePhysician>> getPhysicianInfo(int personId) {
return Single.create(subscriber -> {
apiClient.getPhysicianInfo(personId)
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.io())
.subscribe(response -> {
ResponseGeneric<List<ResponsePhysician>> responseBody = response.body();
if(responseBody != null && responseBody.statusCode == 1) {
if (!subscriber.isDisposed()) subscriber.onSuccess(responseBody.data);
} else if(response.body() != null && response.body().status != null ){
if (!subscriber.isDisposed()) subscriber.onError(new Throwable(response.body().status));
} else {
if (!subscriber.isDisposed()) subscriber.onError(new Throwable(response.message()));
}
}, throwable -> {
throwable.printStackTrace();
if(!subscriber.isDisposed()) subscriber.onError(throwable);
});
});
}
public Single<List<ResponseQuestion>> getHealthQuestions(int personId){
return Single.create(subscriber -> {
apiClient.getBaseQuestions(personId)
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.io())
.subscribe(response -> {
ResponseGeneric<List<ResponseQuestion>> responseBody = response.body();
if(responseBody != null && responseBody.data != null) {
if (!subscriber.isDisposed()) subscriber.onSuccess(response.body().data);
} else if(response.body() != null && response.body().status != null ){
if (!subscriber.isDisposed()) subscriber.onError(new Throwable(response.body().status));
} else {
if (!subscriber.isDisposed()) subscriber.onError(new Throwable(response.message()));
}
}, throwable -> {
throwable.printStackTrace();
if(!subscriber.isDisposed()) subscriber.onError(throwable);
});
});
}
//please note that ResponseGeneric is just an outer wrapper of the returned data - common to all API's in this project
public class ResponseGeneric<T> {
@SerializedName("Status")
public String status;
@SerializedName("StatusCode")
public float statusCode;
@SerializedName("Data")
public T data;
}
//
// API end-use layer - this gets close to the UI, so notice the observer is set for the main thread
//
private static class MergedResponse{// this is just a POJO to store all the responses in one object
public List<ResponseQuestion> listQuestions;
public List<ResponsePhysician> listPhysicians;
public MergedResponse(List<ResponseQuestion> listQuestions, List<ResponsePhysician> listPhysicians){
this.listQuestions = listQuestions;
this.listPhysicians = listPhysicians;
}
}
// example of Single.zip() - calls getHealthQuestions() and getPhysicianInfo() from API Middle Layer
private void downloadHealthQuestions(int personId) {
addRxSubscription(Single
.zip(getHealthQuestions(personId), getPhysicianInfo(personId), MergedResponse::new)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(response -> {
if(response != null) {
Timber.i(" - total health questions downloaded %d", response.listQuestions.size());
Timber.i(" - physicians downloaded %d", response.listPhysicians.size());
if (response.listPhysicians != null && response.listPhysicians.size()>0) {
// do your stuff to process response data
}
if (response.listQuestions != null && response.listQuestions.size()>0) {
// do your stuff to process response data
}
} else {
// process error - show message
}
}, error -> {
// process error - show network error message
}));
}
You can use zip from RxJava with Java 8:
Observable<MovieResponse> movies = ...
Observable<PictureResponse> picture = ...
Observable<ZipResponse> response = Observable.zip(movies, picture, ZipResponse::new);
class ZipResponse {
private MovieResponse movieResponse;
private PictureResponse pictureResponse;
ZipResponse(MovieResponse movieResponse, PictureResponse pictureResponse) {
this.movieResponse = movieResponse;
this.pictureResponse = pictureResponse;
}
public MovieResponse getMovieResponse() {
return movieResponse;
}
public void setMovieResponse(MovieResponse movieResponse) {
this.movieResponse= movieResponse;
}
public PictureResponse getPictureResponse() {
return pictureResponse;
}
public void setPictureResponse(PictureResponse pictureResponse) {
this.pictureResponse= pictureResponse;
}
}
You can use the .zipWith operator for Observable chains.
If uploadMovies() and uploadPictures() return Observable,
uploadMovies()
.zipWith(uploadPictures()) { m, p ->
"$m with $p were uploaded"
}
.subscribe { print(it) }