The official documentation of Mono#block() says:
Subscribe to this Mono and block indefinitely until a next signal is received. Returns that value, or null if the Mono completes empty. In case the Mono errors, the original exception is thrown (wrapped in a RuntimeException if it was a checked exception).
So it is clear that block() is blocking and the next line will not execute until block() resolves.
My confusion is that I expected toFuture() to be non-blocking, but it behaves exactly like block(). The documentation of Mono#toFuture() states:
Transform this Mono into a CompletableFuture completing on onNext or onComplete and failing on onError.
That is not very clear, and nowhere does it say that Mono#toFuture() is blocking.
Can someone confirm whether toFuture() is blocking or non-blocking?
Also, if it is non-blocking, which thread is responsible for executing the code inside the CompletableFuture?
Update: added code snippet
Using Mono.block():
long time = System.currentTimeMillis();
String block = Mono.fromCallable(() -> {
logger.debug("inside in fromCallable() block()");
//Upstream httpcall with apache httpClient().
// which takes atleast 1sec to complete.
return "Http response as string";
}).block();
logger.info("total time needed {}", (System.currentTimeMillis()-time));
return CompletableFuture.completedFuture(block);
Using Mono.toFuture():
long time = System.currentTimeMillis();
CompletableFuture<String> toFuture = Mono.fromCallable(() -> {
logger.debug("inside in fromCallable() block()");
//Upstream httpcall with apache httpClient().
// which takes atleast 1sec to complete.
return "Http response as string";
}).toFuture();
logger.info("total time needed {}", (System.currentTimeMillis()-time));
return toFuture;
These two code snippets behave exactly the same.
-- EDIT: I was wrong. mono.toFuture() doesn't block --
mono.toFuture() isn't blocking. Look at this test:
@Test
void testMonoToFuture() throws ExecutionException, InterruptedException {
System.out.println(LocalTime.now() + ": start");
Mono<String> mono = Mono.just("hello StackOverflow")
.delayElement(Duration.ofMillis(500))
.doOnNext((s) -> System.out.println(LocalTime.now() + ": mono completed"));
Future<String> future = mono.toFuture();
System.out.println(LocalTime.now() + ": future created");
String result = future.get();
System.out.println(LocalTime.now() + ": future completed");
assertThat(result).isEqualTo("hello StackOverflow");
}
This is the result:
20:18:49.557: start
20:18:49.575: future created
20:18:50.088: mono completed
20:18:50.088: future completed
The future is created almost immediately. Half a second later, the mono completes and immediately after that, the future completes. This is exactly what I would expect to happen.
So why does the mono seem to block in the example provided in the question? It's because of the way mono.fromCallable() works. When and where does that callable actually run? mono.fromCallable() doesn't spawn an extra thread to do the work. From my tests it seems that the callable runs when you first call subscribe() or block() or something similar on the mono, and it runs in the thread that does that.
Here is a test showing that if you create a mono with fromCallable(), subscribing will cause the callable to be executed on the main thread, and even the subscribe() call itself will appear to block.
@Test
void testMonoToFuture() throws ExecutionException, InterruptedException {
System.out.println(LocalTime.now() + ": start");
System.out.println("main thread: " + Thread.currentThread().getName());
Mono<String> mono = Mono.fromCallable(() -> {
System.out.println("callabel running in thread: " + Thread.currentThread().getName());
Thread.sleep(1000);
return "Hello StackOverflow";
})
.doOnNext((s) -> System.out.println(LocalTime.now() + ": mono completed"));
System.out.println("before subscribe");
mono.subscribe(System.out::println);
System.out.println(LocalTime.now() + ": after subscribe");
}
result:
20:53:37.071: start
main thread: main
before subscribe
callable running in thread: main
20:53:38.099: mono completed
Hello StackOverflow
20:53:38.100: after subscribe
Conclusion: mono.toFuture() isn't any more blocking than mono.subscribe(). If you want to execute some piece of code asynchronously, you shouldn't be using Mono.fromCallable(). You could consider using Executors.newSingleThreadExecutor().submit(someCallable) instead.
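For example, a rough sketch of that executor-based approach (the callable just simulates the slow HTTP call from the question):
ExecutorService executor = Executors.newSingleThreadExecutor();

long time = System.currentTimeMillis();
Future<String> future = executor.submit(() -> {
    Thread.sleep(1000); // simulated upstream HTTP call taking ~1 sec
    return "Http response as string";
});
// submit() returns immediately; the callable runs on the executor's worker thread
System.out.println("submit returned after " + (System.currentTimeMillis() - time) + " ms");

String response = future.get(); // blocks only here, just like toFuture().get()
System.out.println(response);
executor.shutdown();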
For reference, here is my original (wrong) answer where I belittle the mono.block() method that was assuredly written by people who know a lot more about Java and coding than I do. A personal lesson in humility, I guess.
EVERYTHING BELOW THIS IS NONSENSE
I wanted to verify exactly how this works so I wrote some tests. Unfortunately, it turns out that mono.toFuture() is indeed blocking and the result is evaluated synchronously. I honestly don't know why you would ever use this feature. The whole point of a Future is to hold the result of an asynchronous evaluation.
Here is my test:
@Test
void testMonoToFuture() throws ExecutionException, InterruptedException {
Mono<Integer> mono = Mono.fromCallable(() -> {
System.out.println("start mono");
Thread.sleep(1000);
System.out.println("mono completed");
return 0;
});
Future<Integer> future = mono.toFuture();
System.out.println("future created");
future.get();
System.out.println("future completed");
}
Result:
start mono
mono completed
future created
future completed
Here is an implementation of monoToFuture() that works the way that I would expect it to:
@Test
void testMonoToFuture() throws ExecutionException, InterruptedException {
Mono<Integer> mono = Mono.fromCallable(() -> {
System.out.println("start mono");
Thread.sleep(1000);
System.out.println("mono completed");
return 0;
});
Future<Integer> future = monoToFuture(mono, Executors.newSingleThreadExecutor());
System.out.println("future created");
future.get();
System.out.println("future completed");
}
private <T> Future<T> monoToFuture(Mono<T> mono, ExecutorService executorService){
return executorService.submit((Callable<T>) mono::block);
}
Result:
future created
start mono
mono completed
future completed
TL;DR
Mono.toFuture() is not blocking but Mono.toFuture().get() is blocking. block() is technically the same as toFuture().get() and both are blocking.
Mono.toFuture() just transforms the Mono into a CompletableFuture by subscribing to it; the call itself returns immediately. But that doesn't mean you can access the result (in your case the String) of the corresponding Mono right after this call. The CompletableFuture is still async, and you can use methods like thenApply(), thenCompose(), thenCombine(), ... to continue async processing.
CompletableFuture<Double> result = getUserDetail(userId)
.toFuture()
.thenCompose(user -> getCreditRating(user));
where getUserDetail is defined as
Mono<User> getUserDetail(String userId);
Mono.toFuture() is useful when you need to combine different async APIs. For example, the AWS Java SDK v2 API is async but based on CompletableFuture; we can bridge the two worlds using Mono.toFuture() or Mono.fromFuture().
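For example, a rough sketch of such bridging (the CompletableFuture here just simulates what an async SDK call would return):
// some async API returning a CompletableFuture (simulated here)
CompletableFuture<String> userJson =
        CompletableFuture.supplyAsync(() -> "{\"name\":\"alice\"}");

// bridge it into a reactive pipeline...
Mono<Integer> nameLength = Mono.fromFuture(userJson)
        .map(String::length);

// ...and back out again for a CompletableFuture-based caller
CompletableFuture<Integer> asFuture = nameLength.toFuture();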
Related
Here's a short code version of the problem I'm facing:
public static void main(String[] args) {
CompletableFuture.supplyAsync(() -> {
/*
try {
Thread.sleep(2000);
} catch (InterruptedException ignored) {}
*/
//System.out.println("supplyAsync: " + Thread.currentThread().getName());
return 1;
})
.thenApply(i -> {
System.out.println("apply: " + Thread.currentThread().getName());
return i + 1;
})
.thenAccept((i) -> {
System.out.println("accept: " + Thread.currentThread().getName());
System.out.println("result: " + i);
}).join();
}
This is the output that I get:
apply: main
accept: main
result: 2
I'm surprised to see main there! I expected something like the following, which is what happens when I uncomment the Thread.sleep() call, or even just the single sysout statement:
supplyAsync: ForkJoinPool.commonPool-worker-1
apply: ForkJoinPool.commonPool-worker-1
accept: ForkJoinPool.commonPool-worker-1
result: 2
I understand thenApplyAsync() will make sure it won't run on the main thread, but I want to avoid passing the data returned by the supplier from the thread that ran supplyAsync to the thread that's going to run thenApply and the other subsequent thens in the chain.
The method thenApply evaluates the function in the caller’s thread because the future has already been completed. Of course, when you insert a sleep into the supplier, the future has not been completed by the time thenApply is called. Even a print statement might slow down the supplier enough to have the main thread invoke thenApply and thenAccept first. But this is not reliable behavior; you may get different results when running the code repeatedly.
Not only does the future not remember which thread completed it, there is also no way to tell an arbitrary thread to execute particular code. The thread might be busy with something else, being entirely uncooperative, or it might even have terminated in the meanwhile.
Just consider
ExecutorService s = Executors.newSingleThreadExecutor();
CompletableFuture<Integer> cf = CompletableFuture.supplyAsync(() -> {
System.out.println("supplyAsync: " + Thread.currentThread().getName());
return 1;
}, s);
s.shutdown();
s.awaitTermination(1, TimeUnit.DAYS);
cf.thenApply(i -> {
System.out.println("apply: " + Thread.currentThread().getName());
return i + 1;
})
.thenAccept((i) -> {
System.out.println("accept: " + Thread.currentThread().getName());
System.out.println("result: " + i);
}).join();
How could we expect the functions passed to thenApply and thenAccept to be executed in the already terminated pool’s worker thread?
We could also write
CompletableFuture<Integer> cf = new CompletableFuture<>();
Thread t = new Thread(() -> {
System.out.println("completing: " + Thread.currentThread().getName());
cf.complete(1);
});
t.start();
t.join();
System.out.println("completer: " + t.getName() + " " + t.getState());
cf.thenApply(i -> {
System.out.println("apply: " + Thread.currentThread().getName());
return i + 1;
})
.thenAccept((i) -> {
System.out.println("accept: " + Thread.currentThread().getName());
System.out.println("result: " + i);
}).join();
which will print something like
completing: Thread-0
completer: Thread-0 TERMINATED
apply: main
accept: main
result: 2
Obviously, we can’t insist on this thread processing the subsequent stages.
But even when the thread is a still alive worker thread of a pool, it doesn’t know that it has completed a future nor has it a notion of “processing subsequent stages”. Following the Executor abstraction, it just has received an arbitrary Runnable from the queue and after processing it, it proceeds with its main loop, fetching the next Runnable from the queue.
So once the first future has been completed, the only way to tell it to do the work of completing other futures, is by enqueuing the tasks. This is what happens when using thenApplyAsync specifying the same pool or performing all actions with the …Async methods without an executor, i.e. using the default pool.
When you use a single threaded executor for all …Async methods, you can be sure that all actions are executed by the same thread, but they will still get through the pool’s queue. Since even then, it’s the main thread actually enqueuing the dependent actions in case of an already completed future, a thread safe queue and hence, synchronization overhead, is unavoidable.
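For example, a single-threaded-executor variant of the snippet from the question might look like this; every stage runs on the same worker thread, but each one is still enqueued:
ExecutorService single = Executors.newSingleThreadExecutor();

CompletableFuture.supplyAsync(() -> {
    System.out.println("supplyAsync: " + Thread.currentThread().getName());
    return 1;
}, single)
.thenApplyAsync(i -> {
    System.out.println("apply: " + Thread.currentThread().getName());
    return i + 1;
}, single)
.thenAcceptAsync(i -> {
    System.out.println("accept: " + Thread.currentThread().getName());
    System.out.println("result: " + i);
}, single)
.join();

single.shutdown();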
But note that even if you manage to create the chain of dependent actions first, before a single worker thread processes them all sequentially, this overhead is still there. Each future’s completion is done by storing the new state in a thread safe way, making the result potentially visible to all other threads, and atomically checking whether a concurrent completion (e.g. a cancelation) has happened in the meanwhile. Then, the dependent action(s) chained by other threads will be fetched, of course, in a thread safe way, before they are executed.
All these actions with synchronization semantics make it unlikely that there are benefits of processing the data by the same thread when having a chain of dependent CompletableFutures.
The only way to have actual local processing, potentially with performance benefits, is by using
CompletableFuture.runAsync(() -> {
System.out.println("supplyAsync: " + Thread.currentThread().getName());
int i = 1;
System.out.println("apply: " + Thread.currentThread().getName());
i = i + 1;
System.out.println("accept: " + Thread.currentThread().getName());
System.out.println("result: " + i);
}).join();
Or, in other words, if you don’t want detached processing, don’t create detached processing stages in the first place.
I am making multiple async calls to my database. I store all those async calls on a List<CompletableFuture<X>> list. I want to collect all the results together, so I need to wait for all of those calls to complete.
One way is to create a CompletableFuture.allOf(list.toArray(...))...
Another way is to use: list.stream.map(cf -> cf.join())...
I was just wondering if there are any advantages of creating the global CompletableFuture and waiting for it to complete (when all the individual CompletableFuture complete) over directly waiting for the individual CompletableFutures to complete.
The main thread gets blocked either way.
static CompletableFuture<Void> getFailingCF() {
return CompletableFuture.runAsync(() -> {
System.out.println("getFailingCF :: Started getFailingCF.. ");
throw new RuntimeException("getFailingCF:: Failed");
});
}
static CompletableFuture<Void> getOkCF() {
return CompletableFuture.runAsync(() -> {
System.out.println("getOkCF :: Started getOkCF.. ");
LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(3));
System.out.println("getOkCF :: Completed getOkCF.. ");
});
}
public static void main(String[] args) {
List<CompletableFuture<Void>> futures = new ArrayList<>();
futures.add(getFailingCF());
futures.add(getOkCF());
// using CompletableFuture.allOf
var allOfCF = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
allOfCF.join();
// invoking join on individual CF
futures.stream()
.map(CompletableFuture::join)
.collect(Collectors.toList());
}
In the code snippet above, the difference lies in exception handling: CompletableFuture.allOf(..) wraps any exception thrown by any of the CompletableFutures, while allowing the rest of the threads (those executing the other CompletableFutures) to continue their execution.
The list.stream.map(cf -> cf.join())... way immediately throws the exception and terminates the app (and all threads executing the CFs in the list).
Note that invoking join() on allOf throws the wrapped exception, too. It will also terminate the app. But, by this time, unlike list.stream.map(cf -> cf.join())..., the rest of the threads have completed their processing.
allOfCF.whenComplete(..) is one of the graceful ways to handle the execution result (normal or exceptional) of all the CFs:
allOfCF.whenComplete((v, ex) -> {
System.out.println("In whenComplete...");
System.out.println("----------- Exception Status ------------");
System.out.println(" 1: " + futures.get(0).isCompletedExceptionally());
System.out.println(" 2: " + futures.get(1).isCompletedExceptionally());
});
In the list.stream.map(cf -> cf.join())... way, one needs to wrap the join() call in try/catch.
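For example, a minimal sketch (using the futures list from the snippet above):
for (CompletableFuture<Void> cf : futures) {
    try {
        cf.join();
    } catch (CompletionException e) {
        // join() wraps the original failure in a CompletionException
        System.out.println("a future failed: " + e.getCause());
    } catch (CancellationException e) {
        System.out.println("a future was cancelled");
    }
}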
I am using Java 8, and I want to know the recommended way to enforce a timeout on 3 async jobs that I would like to execute asynchronously and whose results I retrieve from the futures. Note that the timeout is the same for all 3 jobs. I also want to cancel a job if it goes beyond the time limit.
I am thinking something like this:
// Submit jobs async
List<CompletableFuture<String>> futures = submitJobs(); // Uses CompletableFuture.supplyAsync
CompletableFuture<Void> allFutures = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
try {
allFutures.get(100L, TimeUnit.MILLISECONDS);
} catch (TimeoutException e){
for(CompletableFuture<String> f : futures) {
if(!f.isDone()) {
/*
From Java Doc:
@param mayInterruptIfRunning this value has no effect in this
* implementation because interrupts are not used to control
* processing.
*/
f.cancel(true);
}
}
}
List<String> output = new ArrayList<>();
for(CompletableFuture<String> fu : futures) {
if(!fu.isCancelled()) { // Is this needed?
output.add(fu.join());
}
}
return output;
Will something like this work? Is there a better way?
How do I cancel the futures properly? The Javadoc says the thread cannot be interrupted. So, if I cancel a future and then call join(), will I get the result immediately, since the thread will not be interrupted?
Is it recommended to use join() or get() to get the result after waiting is over?
It is worth noting that calling cancel on CompletableFuture is effectively the same as calling completeExceptionally on the current stage. The cancellation will not impact prior stages. With that said:
In principle, something like this will work, assuming upstream cancellation is not necessary (treating the snippet above as pseudocode).
CompletableFuture cancellation will not interrupt the current thread. Cancellation will cause all downstream stages to be triggered immediately with a CancellationException (will short circuit the execution flow).
'join' and 'get' are effectively the same in the case where the caller is willing to wait indefinitely; join handles wrapping the checked exceptions for you. If the caller wants a timeout, get will be needed.
Including a segment to illustrate the behavior on cancellation. Note how downstream processes will not be started, but upstream processes continue even after cancellation.
public static void main(String[] args) throws Exception
{
int maxSleepTime = 1000;
Random random = new Random();
AtomicInteger value = new AtomicInteger();
List<String> calculatedValues = new ArrayList<>();
Supplier<String> process = () -> { try { Thread.sleep(random.nextInt(maxSleepTime)); System.out.println("Stage 1 Running!"); } catch (InterruptedException e) { e.printStackTrace(); } return Integer.toString(value.getAndIncrement()); };
List<CompletableFuture<String>> stage1 = IntStream.range(0, 10).mapToObj(val -> CompletableFuture.supplyAsync(process)).collect(Collectors.toList());
List<CompletableFuture<String>> stage2 = stage1.stream().map(Test::appendNumber).collect(Collectors.toList());
List<CompletableFuture<String>> stage3 = stage2.stream().map(Test::printIfCancelled).collect(Collectors.toList());
CompletableFuture<Void> awaitAll = CompletableFuture.allOf(stage2.toArray(new CompletableFuture[0]));
try
{
/*Wait 1/2 the time, some should be complete. Some not complete -> TimeoutException*/
awaitAll.get(maxSleepTime / 2, TimeUnit.MILLISECONDS);
}
catch(TimeoutException ex)
{
for(CompletableFuture<String> toCancel : stage2)
{
boolean irrelevantValue = false;
if(!toCancel.isDone())
toCancel.cancel(irrelevantValue);
else
calculatedValues.add(toCancel.join());
}
}
System.out.println("All futures Cancelled! But some Stage 1's may still continue printing anyways.");
System.out.println("Values returned as of cancellation: " + calculatedValues);
Thread.sleep(maxSleepTime);
}
private static CompletableFuture<String> appendNumber(CompletableFuture<String> baseFuture)
{
return baseFuture.thenApply(val -> { System.out.println("Stage 2 Running"); return "#" + val; });
}
private static CompletableFuture<String> printIfCancelled(CompletableFuture<String> baseFuture)
{
return baseFuture.thenApply(val -> { System.out.println("Stage 3 Running!"); return val; }).exceptionally(ex -> { System.out.println("Stage 3 Cancelled!"); return ex.getMessage(); });
}
If it is necessary to cancel the upstream process (ex: cancel some network call), custom handling will be needed.
After calling cancel you cannot join the future, since you get an exception.
One way to terminate the computation is to let it hold a reference to the future and check it periodically: if it was cancelled, abort the computation from inside.
This can be done if the computation is a loop where you can perform the check at each iteration.
Does it need to be a CompletableFuture? Another way is to avoid CompletableFuture and use a plain Future or a FutureTask instead: if you execute it with an Executor, calling future.cancel(true) will terminate the computation if possible.
Answering the question "if I call join(), will I get the result immediately?":
No, you will not get it immediately; it will hang and wait for the computation to complete. There is no way to force a computation that takes a long time to finish in a shorter time.
You can call future.complete(value) to provide a value to be used as a default result by other threads that hold a reference to that future.
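A rough sketch of that cooperative-cancellation idea (illustrative only):
CompletableFuture<Integer> future = new CompletableFuture<>();

Executors.newSingleThreadExecutor().submit(() -> {
    int sum = 0;
    for (int i = 0; i < 1_000_000; i++) {
        if (future.isCancelled()) {
            return;               // abort the computation from inside
        }
        sum += i;                 // one unit of work per iteration
    }
    future.complete(sum);         // publish the result if never cancelled
});

// meanwhile, another thread may call future.cancel(true);
// the loop will notice it at the next iteration and stop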
Maybe I just don't really understand the inner workings of subscribeOn and observeOn, but I recently encountered something really odd. I was under the impression that subscribeOn determines the Scheduler on which processing initially starts (especially when we, e.g., have a lot of maps transforming the stream of data), and that observeOn can be used anywhere between those maps to change Schedulers when appropriate (first do networking, then computation, finally switch to the UI thread).
However, I noticed that when not directly chaining those calls to my Observable or Single, it won't work. Here's a minimal working Example JUnit Test:
import org.junit.Test;
import rx.Single;
import rx.schedulers.Schedulers;
public class SubscribeOnTest {
@Test public void not_working_as_expected() throws Exception {
Single<Integer> single = Single.<Integer>create(singleSubscriber -> {
System.out.println("Doing some computation on thread " + Thread.currentThread().getName());
int i = 1;
singleSubscriber.onSuccess(i);
});
single.subscribeOn(Schedulers.computation()).observeOn(Schedulers.io());
single.subscribe(integer -> {
System.out.println("Observing on thread " + Thread.currentThread().getName());
});
System.out.println("Doing test on thread " + Thread.currentThread().getName());
Thread.sleep(1000);
}
@Test public void working_as_expected() throws Exception {
Single<Integer> single = Single.<Integer>create(singleSubscriber -> {
System.out.println("Doing some computation on thread " + Thread.currentThread().getName());
int i = 1;
singleSubscriber.onSuccess(i);
}).subscribeOn(Schedulers.computation()).observeOn(Schedulers.io());
single.subscribe(integer -> {
System.out.println("Observing on thread " + Thread.currentThread().getName());
});
System.out.println("Doing test on thread " + Thread.currentThread().getName());
Thread.sleep(1000);
}
}
The test not_working_as_expected() gives me the following output:
Doing some computation on thread main
Observing on thread main
Doing test on thread main
whereas working_as_expected() gives me
Doing some computation on thread RxComputationScheduler-1
Doing test on thread main
Observing on thread RxIoScheduler-2
The only difference is that in the first test the Single is created in one statement and the schedulers are applied afterwards, whereas in the working example the method calls are chained directly onto the creation of the Single. But shouldn't that be irrelevant?
All "modifications" performed by operators are immutable, meaning that they return a new stream that receives notifications in an altered manner from the previous one. Since you just called subscribeOn and observeOn operators and didn't store their result, the subscription made later is on the unaltered stream.
One side note: I didn't quite understand your definition of subscribeOn behavior. If you meant that map operators are somehow affected by it, this is not true. subscribeOn defines the Scheduler on which the OnSubscribe function is called; in your case, that is the function you pass to the create() method. On the other hand, observeOn defines the Scheduler on which each successive stream (the streams returned by applied operators) handles emissions coming from upstream.
subscribeOn() returns a new instance of the Observable, but in the first test you just ignore it and then subscribe to the original Observable, which by default subscribes on the calling (main) thread.
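In other words, a minimal fix for the first test is to keep the instance those operators return, for example:
Single<Integer> single = Single.<Integer>create(singleSubscriber -> {
    System.out.println("Doing some computation on thread " + Thread.currentThread().getName());
    singleSubscriber.onSuccess(1);
});

// keep the decorated Single instead of throwing it away
single = single.subscribeOn(Schedulers.computation()).observeOn(Schedulers.io());

single.subscribe(integer ->
        System.out.println("Observing on thread " + Thread.currentThread().getName()));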
I don't understand how to use AsyncRestTemplate effectively for making external service calls. For the code below:
class Foo {
public void doStuff() {
Future<ResponseEntity<String>> future1 = asyncRestTemplate.getForEntity(
url1, String.class);
String response1 = future1.get().getBody();
Future<ResponseEntity<String>> future2 = asyncRestTemplate.getForEntity(
url2, String.class);
String response2 = future2.get().getBody();
Future<ResponseEntity<String>> future3 = asyncRestTemplate.getForEntity(
url3, String.class);
String response3 = future3.get().getBody();
}
}
Ideally I want to execute all 3 calls simultaneously and process the results once they're all done. However, each external service call is not fetched until get() is called, but get() is blocking. So doesn't that defeat the purpose of AsyncRestTemplate? I might as well use RestTemplate.
So I don't understand how I can get them to execute simultaneously.
Simply don't call blocking get() before dispatching all of your asynchronous calls:
class Foo {
public void doStuff() {
ListenableFuture<ResponseEntity<String>> future1 = asyncRestTemplate
.getForEntity(url1, String.class);
ListenableFuture<ResponseEntity<String>> future2 = asyncRestTemplate
.getForEntity(url2, String.class);
ListenableFuture<ResponseEntity<String>> future3 = asyncRestTemplate
.getForEntity(url3, String.class);
String response1 = future1.get().getBody();
String response2 = future2.get().getBody();
String response3 = future3.get().getBody();
}
}
You can do both the dispatching and the getting in loops, but note that this kind of results gathering is inefficient, as it can get stuck waiting on the next unfinished future.
You could add all the futures to a collection and iterate through it, testing each future with the non-blocking isDone(). When that call returns true, you can then call get().
This way your en masse results gathering will be optimised, rather than waiting on the next slow future result in the order the get()s were called.
Better still, you can register callbacks on each ListenableFuture returned by AsyncRestTemplate, and then you don't have to worry about cyclically inspecting the potential results.
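For instance, a callback registration could look roughly like this (the handling inside is just illustrative):
ListenableFuture<ResponseEntity<String>> future1 =
        asyncRestTemplate.getForEntity(url1, String.class);

future1.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
    @Override
    public void onSuccess(ResponseEntity<String> result) {
        System.out.println("url1 completed: " + result.getStatusCode());
    }

    @Override
    public void onFailure(Throwable ex) {
        System.out.println("url1 failed: " + ex.getMessage());
    }
});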
If you don't have to use AsyncRestTemplate, I would suggest using RxJava instead. The RxJava zip operator is what you are looking for. Check the code below:
private rx.Observable<String> externalCall(String url, int delayMilliseconds) {
return rx.Observable.create(
subscriber -> {
try {
Thread.sleep(delayMilliseconds); //simulate long operation
subscriber.onNext("response(" + url + ") ");
subscriber.onCompleted();
} catch (InterruptedException e) {
subscriber.onError(e);
}
}
);
}
public void callServices() {
rx.Observable<String> call1 = externalCall("url1", 1000).subscribeOn(Schedulers.newThread());
rx.Observable<String> call2 = externalCall("url2", 4000).subscribeOn(Schedulers.newThread());
rx.Observable<String> call3 = externalCall("url3", 5000).subscribeOn(Schedulers.newThread());
rx.Observable.zip(call1, call2, call3, (resp1, resp2, resp3) -> resp1 + resp2 + resp3)
.subscribeOn(Schedulers.newThread())
.subscribe(response -> System.out.println("done with: " + response));
}
All requests to the external services will be executed in separate threads; when the last call finishes, the transformation function (in this example, simple string concatenation) is applied and the result (the concatenated string) is emitted from the 'zip' observable.
What I understand from your question is that you have a predefined asynchronous API and you are trying to call it asynchronously using the AsyncRestTemplate class.
I have written a method that will help you call your endpoints asynchronously.
public void testMyAsynchronousMethod(String... args) throws Exception {
// Start the clock
long start = System.currentTimeMillis();
// Kick of multiple, asynchronous lookups
ListenableFuture<ResponseEntity<String>> future1 = asyncRestTemplate
.getForEntity(url1, String.class);
ListenableFuture<ResponseEntity<String>> future2 = asyncRestTemplate
.getForEntity(url2, String.class);
ListenableFuture<ResponseEntity<String>> future3 = asyncRestTemplate
.getForEntity(url3, String.class);
// Wait until they are all done
while (!(future1.isDone() && future2.isDone() && future3.isDone())) {
Thread.sleep(10); //10-millisecond pause between each check
}
// Print results, including elapsed time
System.out.println("Elapsed time: " + (System.currentTimeMillis() - start));
System.out.println(future1.get());
System.out.println(future2.get());
System.out.println(future3.get());
}
You might want to use the CompletableFuture class (javadoc).
Transform your calls into CompletableFutures. For instance:
final CompletableFuture<ResponseEntity<String>> cf = CompletableFuture.supplyAsync(() -> {
try {
return future.get();
} catch (InterruptedException | ExecutionException e) {
throw new RuntimeException(e);
}
});
Next, call the CompletableFuture.allOf method with your 3 newly created completable futures.
Call the join() method on the result. After the resulting completable future is resolved, you can get the results from each separate completable future created above.
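Put together, the remaining steps might look like this (with cf1, cf2 and cf3 being three futures wrapped as shown above):
CompletableFuture.allOf(cf1, cf2, cf3).join(); // wait for all three calls

// once allOf completes, each individual future is already resolved
ResponseEntity<String> response1 = cf1.join();
ResponseEntity<String> response2 = cf2.join();
ResponseEntity<String> response3 = cf3.join();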
I think you are misunderstanding a few things here. When you call the getForEntity method, the requests are already fired. When the get() method of the future object is called, you are just waiting for the request to complete. So in order to fire all those three requests within the same subsecond, you just have to do:
// Each of the lines below will fire an http request when it's executed
Future<ResponseEntity<String>> future1 = asyncRestTemplate.getForEntity(url1, String.class);
Future<ResponseEntity<String>> future2 = asyncRestTemplate.getForEntity(url2, String.class);
Future<ResponseEntity<String>> future3 = asyncRestTemplate.getForEntity(url3, String.class);
After these lines have run, all the requests have already been fired (most probably within the same subsecond). Then you can do whatever you want in the meantime. As soon as you call any of the get() methods, you are waiting for that request to complete. If it has already completed, the call will simply return immediately.
// do whatever you want in the meantime
// get the response of the http call and wait if it's not completed
String response1 = future1.get();
String response2 = future2.get();
String response3 = future3.get();
I don't think any of the previous answers actually achieve parallelism. The problem with @diginoise's response is that as soon as we call get, we're blocked. Consider that the calls are really slow, such that future1 takes 3 seconds to complete, future2 2 seconds and future3 3 seconds again. With 3 get calls one after another, we end up waiting 3 + 2 + 3 = 8 seconds.
@Vikrant Kashyap's answer blocks as well, on while (!(future1.isDone() && future2.isDone() && future3.isDone())). Besides, the while loop is a pretty ugly looking piece of code for 3 futures; what if you have more? @lkz's answer uses a different technology than you asked for, and even then, I'm not sure zip is going to do the job. From the Observable Javadoc:
zip applies this function in strict sequence, so the first item emitted by the new Observable will be the result of the function applied to the first item emitted by each of the source Observables; the second item emitted by the new Observable will be the result of the function applied to the second item emitted by each of those Observables; and so forth.
Due to Spring's widespread popularity, they try very hard to maintain backward compatibility and in doing so, sometimes make compromises with the API. AsyncRestTemplate methods returning ListenableFuture is one such case. If they committed to Java 8+, CompletableFuture could be used instead. Why? Since we won't be dealing with thread pools directly, we don't have a good way to know when all the ListenableFutures have completed. CompletableFuture has an allOf method that creates a new CompletableFuture that is completed when all of the given CompletableFutures complete. Since we don't have that in ListenableFuture, we will have to improvise.
I've not compiled the following code but it should be clear what I'm trying to do. I'm using Java 8 because it's end of 2016.
// Lombok FTW
@RequiredArgsConstructor
public final class CounterCallback implements ListenableFutureCallback<ResponseEntity<String>> {
private final LongAdder adder;
public void onFailure(Throwable ex) {
adder.increment();
}
public void onSuccess(ResponseEntity<String> result) {
adder.increment();
}
}
ListenableFuture<ResponseEntity<String>> f1 = asyncRestTemplate
.getForEntity(url1, String.class);
// more futures: f2 and f3, created the same way
LongAdder adder = new LongAdder();
ListenableFutureCallback<ResponseEntity<String>> callback = new CounterCallback(adder);
Stream.of(f1, f2, f3)
.forEach(f -> f.addCallback(callback));
for (int counter = 1; adder.sum() < 3 && counter < 10; counter++) {
Thread.sleep(1000);
}
// either all futures are done or we're done waiting
Map<Boolean, List<ListenableFuture<ResponseEntity<String>>>> futures = Stream.of(f1, f2, f3)
.collect(Collectors.partitioningBy(Future::isDone));
Now we have a Map for which futures.get(Boolean.TRUE) gives us all the futures that have completed and futures.get(Boolean.FALSE) gives us the ones that didn't. We will want to cancel the ones that didn't complete.
This code does a few things that are important with parallel programming:
It doesn't block.
It limits the operation to some maximum allowed time.
It clearly separates successful and failure cases.
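For completeness, handling the partitioned map could look something like this (again uncompiled, just a sketch):
// cancel whatever did not finish in time...
futures.get(Boolean.FALSE).forEach(f -> f.cancel(true));

// ...and read the responses that did complete
futures.get(Boolean.TRUE).forEach(f -> {
    try {
        System.out.println(f.get().getBody());
    } catch (InterruptedException | ExecutionException e) {
        System.out.println("could not read result: " + e);
    }
});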