Wait for multiple RxJava observables to complete - java

I'm using RxJava in a web service to create multiple observables, each of which does something in its onNext and onError functions. I want to wait until all of these observables have finished their work before returning a response to the client, but I also want these tasks to execute in parallel.
I've merged all of these using:
Observable<Obj> mergedObs = Observable.mergeDelayError(tasks);
I then convert this into a blocking observable via
mergedObs.toList().toBlocking().subscribe(...)
However, I've noticed that while the merged observable waits for all tasks to call subscribe(), it does not wait for the tasks within the subscribe function to complete. Here's some sample code with unimportant details omitted:
public void doWork() {
    Observable<Obj> mergedTasks = getTasks();
    mergedTasks.toList().toBlocking().subscribe(
        results -> LOG.info("Done!")
    );
    return; // eventually returns a web response
}

private Observable<Obj> getTasks() {
    List<Observable<Obj>> tasks = new ArrayList<>();
    Observable<Obj> task1 = getTask1();
    Observable<Obj> task2 = getTask2();
    tasks.add(task1);
    tasks.add(task2);
    // Both tasks execute in parallel
    Observable.mergeDelayError(tasks).toList().subscribe(
        result -> processSearch(),      // this can execute after the service returns a response!
        exception -> handleException()  // this can execute after the service returns a response!
    );
    return Observable.mergeDelayError(tasks);
}
private Observable<Obj> getTask1() {
    // implementation detail
}

private Observable<Obj> getTask2() {
    // implementation detail
}
While this code does wait for both tasks to call subscribe(), it doesn't wait for the work performed within the onNext or onError handlers.
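One possible direction (a sketch of my own, not a confirmed fix, reusing the question's helper names): drop the inner subscribe in getTasks() and attach the side effects to the merged observable with doOnNext/doOnError, so that the blocking subscribe only returns after that work has run:
public void doWork() {
    getTasks()                                       // returns Observable.mergeDelayError(tasks), without the inner subscribe
        .toList()
        .doOnNext(results -> processSearch())        // now runs before toBlocking() returns
        .doOnError(exception -> handleException())   // likewise for failures
        .toBlocking()
        .subscribe(
            results -> LOG.info("Done!"),
            error -> LOG.error("Task failed", error)
        );
    return; // the web response is only built after the side effects above have completed
}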

Related

Concurrently call several Spring microservice URLs

I have a Spring Boot application that will call several microservice URLs using the GET method. These microservice URL endpoints are all implemented as @RestControllers. They don't return Flux or Mono.
I need my application to capture which URLs are not returning 2xx HTTP status.
I'm currently using the following code to do this:
List<String> failedServiceUrls = new ArrayList<>();
for (String serviceUrl : serviceUrls.getServiceUrls()) {
    try {
        ResponseEntity<String> response = rest.getForEntity(serviceUrl, String.class);
        if (!response.getStatusCode().is2xxSuccessful()) {
            failedServiceUrls.add(serviceUrl);
        }
    } catch (Exception e) {
        failedServiceUrls.add(serviceUrl);
    }
}
// all checks are complete so send email with the failedServiceUrls.
mail.sendEmail("Service Check Complete", failedServiceUrls);
The problem is that each URL call is slow to respond, and I have to wait for one call to complete before making the next one.
How can I change this so that the URL calls are made concurrently? After all calls have completed, I need to send an email listing any URLs that returned an error, collected in failedServiceUrls.
Update
I revised the post above to state that I just want the calls to be made concurrently. I don't care that the rest.getForEntity call blocks.
Using an executor service in your code, you can call all the microservices in parallel like this:
// synchronised it as per Maciej's comment:
List<String> failedServiceUrls = Collections.synchronizedList(new ArrayList<>());

ExecutorService executorService = Executors.newFixedThreadPool(serviceUrls.getServiceUrls().size());
List<Callable<String>> callables = serviceUrls.getServiceUrls().stream()
        .map(serviceUrl -> (Callable<String>) () -> {
            ResponseEntity<String> response = rest.getForEntity(serviceUrl, String.class);
            // do something with the response
            if (!response.getStatusCode().is2xxSuccessful()) {
                failedServiceUrls.add(serviceUrl);
            }
            return response.getBody();
        })
        .collect(toList());
List<Future<String>> result = executorService.invokeAll(callables);
for (Future<String> f : result) {
    String resultFromService = f.get(); // blocks until that call's execution is over
}
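One caveat with the code above (my note, not part of the original answer): if getForEntity throws, for example on a refused connection, the exception resurfaces from f.get() as an ExecutionException instead of landing in failedServiceUrls, and the pool is never shut down. A rough sketch of handling both, assuming getServiceUrls() returns a List and relying on invokeAll returning futures in task order:
for (int i = 0; i < result.size(); i++) {
    try {
        result.get(i).get(); // blocks until that call has finished
    } catch (InterruptedException | ExecutionException e) {
        // the i-th future corresponds to the i-th URL because invokeAll preserves order
        failedServiceUrls.add(serviceUrls.getServiceUrls().get(i));
    }
}
executorService.shutdown(); // release the pool once all checks are done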
If you just want to make calls concurrently and you don't care about blocking threads you can:
1. wrap the blocking service call using Mono#fromCallable
2. transform serviceUrls.getServiceUrls() into a reactive stream using Flux#fromIterable
3. concurrently call and filter failed services with Flux#filterWhen, using the Flux from 2. and the asynchronous service call from 1.
4. wait for all calls to complete using Flux#collectList and send the email with the invalid urls in subscribe
void sendFailedUrls() {
    Flux.fromIterable(serviceUrls.getServiceUrls())
        .filterWhen(url -> responseFailed(url))
        .collectList()
        .subscribe(failedUrls -> mail.sendEmail("Service Check Complete", failedUrls));
}

Mono<Boolean> responseFailed(String url) {
    return Mono.fromCallable(() -> rest.getForEntity(url, String.class))
        .map(response -> !response.getStatusCode().is2xxSuccessful())
        .subscribeOn(Schedulers.boundedElastic());
}
Blocking calls with Reactor
Since the underlying service call is blocking, it should be executed on a dedicated thread pool. The size of this thread pool should equal the number of concurrent calls if you want full concurrency. That's why we need .subscribeOn(Schedulers.boundedElastic())
See: https://projectreactor.io/docs/core/release/reference/#faq.wrap-blocking
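If you want the pool size to match the number of URLs exactly, rather than relying on the shared boundedElastic pool, a dedicated scheduler is an option (my sketch, not part of the original answer):
// a dedicated scheduler sized to the number of service URLs
Scheduler serviceCheckScheduler = Schedulers.newBoundedElastic(
        serviceUrls.getServiceUrls().size(), // thread cap = number of concurrent calls
        Integer.MAX_VALUE,                   // queued-task cap
        "service-check");
// then use .subscribeOn(serviceCheckScheduler) inside responseFailed(url)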
Better solution using WebClient
Note, however, that blocking calls should be avoided when using Reactor and Spring WebFlux. The correct way to do this would be to replace RestTemplate with WebClient from Spring 5, which is fully non-blocking.
See: https://docs.spring.io/spring-boot/docs/2.0.3.RELEASE/reference/html/boot-features-webclient.html
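For illustration, responseFailed could look roughly like this with WebClient (a sketch assuming a WebClient bean named webClient; retrieve() turns non-2xx statuses into errors, so onErrorReturn covers both those and connection failures):
Mono<Boolean> responseFailed(String url) {
    return webClient.get()
            .uri(url)
            .retrieve()
            .toBodilessEntity()
            .map(response -> !response.getStatusCode().is2xxSuccessful())
            .onErrorReturn(true); // non-2xx responses and connection errors both count as failed
}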

How to declare and join list of completable future responses in java

The flow goes controller layer -> service layer.
Here I'm calling the processLOBTransactions (async) method from the controller layer.
How can I join all the CompletableFuture responses in the controller layer? My requirement is that, after processLOBTransactions has executed for each list element, I want to write some logs in the controller layer.
Could anyone please give suggestions on how to achieve this?
**Controller Layer:**
class ControllerLayer {
    pktransaction.getLineOfBusinessTransaction().stream().forEach((lob) -> {
        CompletableFuture<Boolean> futureResponse = flPbtService.processLOBTransactions(lob);
    });
    ***// HERE: How can I join all CompletableFuture responses? I want to print logs like "all threads completed"***
}
**Service layer:**
class ServiceLayer {
    @Async("threadPoolTaskExecutor")
    public CompletableFuture<Boolean> processLOBTransactions(LineOfBusinessTransaction lobObj) {
        // Doing some business logic and returning the CompletableFuture as the response
        return CompletableFuture.completedFuture(new Boolean(true));
    }
}
All the CompletableFuture<Boolean> futureResponse objects inside the forEach have to be stored in a collection.
Assuming parallelStream() will not be used, an ArrayList can be used to gather these references.
Outside the loop, these references can be iterated and get() used to obtain the result or the exception.
get() may wait until the actual task completes, so it can be better to use get(timeout, unit) to have a deterministic SLA contract.
get() can throw an exception, so be sure to handle it by catching the exception and taking appropriate action.
If get() with a timeout does not complete within the timeout, you can request cancellation if it is not a high-priority operation, assuming the underlying task does not swallow the interruption (that is a business decision).
ArrayList<CompletableFuture<Boolean>> futures = new ArrayList<>();
IntStream.range(0, 10).forEach((lob) -> {
    CompletableFuture<Boolean> futureResponse = CompletableFuture.completedFuture(Boolean.TRUE);
    futures.add(futureResponse);
});

for (CompletableFuture<Boolean> future : futures) {
    try {
        System.out.println(future.get());
        // or future.get(1, TimeUnit.SECONDS)
    } catch (InterruptedException | ExecutionException e) {
        System.out.println(e.getMessage());
        // future.cancel(true); // if need to cancel the underlying task, assuming the task listens
    }
}
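If the goal is simply to log once everything has finished, CompletableFuture.allOf is an alternative to looping over get() (a sketch reusing the futures list from above; allOf is standard JDK API):
CompletableFuture<Void> all = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
all.whenComplete((ignored, throwable) -> {
    if (throwable != null) {
        System.out.println("At least one task failed: " + throwable.getMessage());
    } else {
        System.out.println("All threads completed");
    }
});
// or block the controller thread until everything is done:
all.join();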

Secure and effective way for waiting for asynchronous task

In the system, I have an object - let's call it TaskProcessor. It holds queue of tasks, which are executed by some pool of threads (ExecutorService + PriorityBlockingQueue)
The result of each task is saved in the database under some unique identifier.
The user, who knows this unique identifier, may check the result of this task. The result could be in the database, but the task could also still be waiting in the queue for execution. In that case, the UserThread should wait until the task is finished.
Additionally, the following assumptions are valid:
Someone else could enqueue the task to TaskProcessor, and some random UserThread can access the result if it knows the unique identifier.
UserThread and TaskProcessor are in the same app. TaskProcessor contains a pool of threads, and UserThread is simply a servlet thread.
UserThread should be blocked when asking for a result that is not available yet. UserThread should be unblocked immediately after TaskProcessor completes the task (or tasks) grouped by a unique identifier.
My first attempt (the naive one), was to check the result in the loop and sleep for some time:
// UserThread
while(!checkResultIsInDatabase(uniqueIdentifier))
sleep(someTime)
But I don't like it. First of all, I am wasting database connections. Moreover, if the task finishes right after the check, the user will keep waiting even though the result has just appeared.
Next attempt was based on wait/notify:
//UserThread
while (!checkResultIsInDatabase())
taskProcessor.wait()
//TaskProcessor
... some complicated calculations
this.notifyAll()
But I don't like it either. If more UserThreads use TaskProcessor, they will be woken up unnecessarily every time any task completes, and moreover they will make unnecessary database calls.
The last attempt was based on something which I called waitingRoom:
//UserThread
Object mutex = new Object();
taskProcessor.addToWaitingRoom(uniqueIdentifier, mutex)
while (!checkResultIsInDatabase())
mutex.wait()
//TaskProcessor
... Some complicated calculations
if (uniqueIdentifierExistInWaitingRoom(taskUniqueIdentifier))
getMutexFromWaitingRoom(taskUniqueIdentifier).notify()
But it does not seem to be safe. Between the database check and wait(), the task could be completed (notify() would have no effect because the UserThread hasn't invoked wait() yet), which may end up in a deadlock.
It seems that I should synchronize it somewhere, but I am afraid that it will not be efficient.
Is there a way to correct any of my attempts, to make them secure and effective? Or maybe there is some other, better way to do this?
You seem to be looking for some sort of future / promise abstraction. Take a look at CompletableFuture, available since Java 8.
CompletableFuture<Void> future = CompletableFuture.runAsync(db::yourExpensiveOperation, executor);
// best approach: attach some callback to run when the future is complete, and handle any errors
future.thenRun(this::onSuccess)
      .exceptionally(ex -> { logger.error("err", ex); return null; });
// if you really need the current thread to block, waiting for the async result:
future.join(); // blocking! returns the result when complete or throws a CompletionException on error
You can also return a (meaningful) value from your async operation and pass the result to the callback. To make use of this, take a look at supplyAsync(), thenAccept(), thenApply(), whenComplete() and the like.
You can also combine multiple futures into one and a lot more.
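For example, a small sketch of returning a value from the async step and consuming it in a callback (Result, yourExpensiveQuery, onSuccess and id are placeholder names, not part of the original answer):
// hypothetical value-returning counterpart of the operation above
CompletableFuture<Result> future =
        CompletableFuture.supplyAsync(() -> db.yourExpensiveQuery(id), executor);

future.thenAccept(result -> onSuccess(result))                          // consume the value when it is ready
      .exceptionally(ex -> { logger.error("err", ex); return null; });  // and still handle errors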
I believe that replacing the mutex with a CountDownLatch in the waitingRoom approach prevents the deadlock.
CountDownLatch latch = new CountDownLatch(1)
taskProcessor.addToWaitingRoom(uniqueIdentifier, latch)
while (!checkResultIsInDatabase())
// consider timed version
latch.await()
//TaskProcessor
... Some complicated calculations
if (uniqueIdentifierExistInWaitingRoom(taskUniqueIdentifier))
getLatchFromWaitingRoom(taskUniqueIdentifier).countDown()
With CompletableFuture and a ConcurrentHashMap you can achieve it:
/* Server class, i.e. your TaskProcessor */

// Map of queued tasks (either pending or ongoing)
private static final ConcurrentHashMap<String, CompletableFuture<YourTaskResult>> tasks = new ConcurrentHashMap<>();

// Launch method. By default, CompletableFuture uses ForkJoinPool, which implicitly enqueues tasks.
private CompletableFuture<YourTaskResult> launchTask(final String taskId) {
    return tasks.computeIfAbsent(taskId, v -> CompletableFuture // return the ongoing task if any, or launch a new one
            .supplyAsync(() ->
                    doYourThing(taskId)) // get from DB or calculate or whatever
            .whenCompleteAsync((result, throwable) -> {
                if (throwable != null) {
                    log.error("Failed task: {}", taskId, throwable);
                }
                tasks.remove(taskId);
            })
    );
}

/* Client class, i.e. your UserThread */

// Usage
YourTaskResult taskResult = taskProcessor.launchTask(taskId).get(); // block until we get a result
Any time a user asks for the result of a taskId, they will either:
enqueue a new task if they are the first to ask for this taskId; or
get the result of the ongoing task with id taskId, if someone else enqueued it first.
This is production code currently used by hundreds of users concurrently.
In our app, users ask for any given file, via a REST endpoint (every user on its own thread). Our taskIds are filenames, and our doYourThing(taskId) retrieves the file from the local filesystem or downloads it from an S3 bucket if it doesn't exist.
Obviously we don't want to download the same file more than once. With the solution I implemented, any number of users can ask for the same file at the same or different times, and the file will be downloaded exactly once. All users that asked for it while it was downloading will get it the moment it finishes downloading; all users that ask for it later will get it instantly from the local filesystem.
Works like a charm.
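A quick illustration of the de-duplication (the file name is hypothetical; both calls share the same underlying future while the task is in flight):
CompletableFuture<YourTaskResult> first  = taskProcessor.launchTask("report-2023.pdf");
CompletableFuture<YourTaskResult> second = taskProcessor.launchTask("report-2023.pdf");
// while the download is ongoing, both callers hold the same CompletableFuture instance,
// so the work behind "report-2023.pdf" runs exactly once
YourTaskResult result = second.get();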
What I understood from the question details is this.
When a UserThread requests a result, there are 3 possibilities:
1. The task has already been completed, so there is no blocking of the user thread and the result is fetched directly from the DB.
2. The task is in the queue or executing but not yet completed, so block the user thread (up to this point there should not be any DB queries), and just after completion of the task (at which point the task result must be saved in the DB), unblock the user thread (now the user thread can query the DB for the result).
3. No task was ever submitted for the uniqueIdentifier the user requested; in this case the DB result will be empty.
Points 1 and 3 are straightforward: there is no blocking of the UserThread, just query the result from the DB.
For point 2, I have written a simple implementation of TaskProcessor. Here I have used a ConcurrentHashMap to keep the current tasks which are not yet completed. This map contains the mapping between a uniqueIdentifier and the corresponding task. I have used the computeIfPresent() method of ConcurrentHashMap (introduced in Java 1.8), which guarantees that the invocation is performed atomically for the same key. Below is what the Javadoc says:
If the value for the specified key is present, attempts to compute a new mapping given the key and its current mapped value. The entire method invocation is performed atomically. Some attempted update operations on this map by other threads may be blocked while computation is in progress, so the computation should be short and simple, and must not attempt to update any other mappings of this map.
So with the use of this method, whenever a user thread requests a task T1, and task T1 is in the queue or executing but not yet completed, the user thread will wait on that task.
When task T1 completes, all the user request threads which were waiting on task T1 will be notified, and then we remove task T1 from the above map.
The other classes referenced in the code below are available at this link.
TaskProcessor.java:
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BiFunction;

public class TaskProcessor implements ITaskProcessor {

    // This map contains all the tasks which are queued or running and not yet completed.
    // If there can be multiple tasks for the same uniqueIdentifier, the value can be changed
    // to a list of the corresponding tasks which are not completed yet.
    private final Map<String, Task> taskInProgressByUniqueIdentifierMap = new ConcurrentHashMap<>();
    private final int QUEUE_SIZE = 100;
    private final BlockingQueue<Task> taskQueue = new ArrayBlockingQueue<Task>(QUEUE_SIZE);
    private final TaskRunner taskRunner = new TaskRunner();
    private final AtomicBoolean isStarted = new AtomicBoolean(false);
    private final DBManager dbManager = new DBManager();
    private Executor executor;

    @Override
    public void start() {
        executor = Executors.newCachedThreadPool();
        isStarted.set(true);
        while (isStarted.get()) {
            try {
                Task task = taskQueue.take();
                executeTaskInSeparateThread(task);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private void executeTaskInSeparateThread(Task task) {
        executor.execute(() -> {
            taskRunner.execute(task, new ITaskProgressListener() {
                @Override
                public void onTaskCompletion(TaskResult taskResult) {
                    task.setCompleted(true);
                    // TODO: the taskResult could also be propagated to the waiting users if required.
                    notifyAllWaitingUsers(task);
                }

                @Override
                public void onTaskFailure(Exception e) {
                    notifyAllWaitingUsers(task);
                }
            });
        });
    }

    private void notifyAllWaitingUsers(Task task) {
        taskInProgressByUniqueIdentifierMap.computeIfPresent(task.getUniqueIdentifier(), new BiFunction<String, Task, Task>() {
            @Override
            public Task apply(String s, Task task) {
                synchronized (task) {
                    task.notifyAll();
                }
                return null; // returning null removes the entry from the map
            }
        });
    }

    // User thread
    @Override
    public ITaskResult getTaskResult(String uniqueIdentifier) {
        Task task = taskInProgressByUniqueIdentifierMap.computeIfPresent(uniqueIdentifier, new BiFunction<String, Task, Task>() {
            @Override
            public Task apply(String s, Task task) {
                synchronized (task) {
                    try {
                        task.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                return task;
            }
        });
        // If task is null, it means the task was not in the queue, so we directly query the DB for the task result.
        if (task != null && !task.isCompleted()) {
            return null; // Handle this condition gracefully; if the task is not completed, there was some exception.
        }
        // At this point the result must already be saved in the DB if the corresponding task has ever been processed.
        return getResultFromDB(uniqueIdentifier);
    }

    private ITaskResult getResultFromDB(String uniqueIdentifier) {
        return dbManager.getTaskResult(uniqueIdentifier);
    }

    // Other thread
    @Override
    public void enqueueTask(Task task) {
        if (isStarted.get()) {
            taskInProgressByUniqueIdentifierMap.putIfAbsent(task.getUniqueIdentifier(), task);
            taskQueue.offer(task);
        }
    }

    @Override
    public void stop() {
        isStarted.compareAndSet(true, false);
    }
}
Let me know in comments if you have any queries.
Thanks.

Mono vs CompletableFuture

CompletableFuture executes a task on a separate thread (it uses a thread pool) and provides a callback function. Let's say I have an API call in a CompletableFuture. Is that API call blocking? Will the thread be blocked until it gets a response from the API? (I know the main thread/Tomcat thread will be non-blocking, but what about the thread on which the CompletableFuture task is executing?)
Mono is completely non-blocking, as far as I know.
Please shed some light on this and correct me if I am wrong.
CompletableFuture is Async. But is it non-blocking?
One thing which is true about CompletableFuture is that it is truly async: it allows you to run your task asynchronously from the caller thread, and the API such as thenXXX allows you to process the result when it becomes available. On the other hand, CompletableFuture is not always non-blocking. For example, when you run the following code, the task will be executed asynchronously on the default ForkJoinPool:
CompletableFuture.supplyAsync(() -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
    }
    return 1;
});
It is clear that the thread in the ForkJoinPool that executes the task will eventually be blocked, which means that we can't guarantee that the call will be non-blocking.
On the other hand, CompletableFuture exposes API which allows you to make it truly non-blocking.
For example, you can always do the following:
public CompletableFuture myNonBlockingHttpCall(Object someData) {
    var uncompletedFuture = new CompletableFuture(); // creates an uncompleted future
    myAsyncHttpClient.execute(someData, (result, exception) -> {
        if (exception != null) {
            uncompletedFuture.completeExceptionally(exception);
            return;
        }
        uncompletedFuture.complete(result);
    });
    return uncompletedFuture;
}
As you can see, the CompletableFuture API provides you with the complete and completeExceptionally methods, which let you complete the execution whenever needed without blocking any thread.
Mono vs CompletableFuture
In the previous section, we got an overview of CompletableFuture's behavior, but what is the central difference between CompletableFuture and Mono?
It is worth mentioning that we can have a blocking Mono as well. Nothing prevents us from writing the following:
Mono.fromCallable(() -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
    }
    return 1;
})
Of course, once we subscribe to that Mono, the caller thread will be blocked. But we can always work around that by providing an additional subscribeOn operator. Nevertheless, the broader API of Mono is not the key feature.
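For completeness, that workaround would look roughly like this (a sketch; boundedElastic is Reactor's scheduler intended for wrapping blocking work):
Mono.fromCallable(() -> {
    Thread.sleep(1000); // blocking work, allowed here because Callable may throw
    return 1;
})
.subscribeOn(Schedulers.boundedElastic()) // shifts the blocking work off the caller thread
.subscribe(value -> System.out.println("got " + value));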
In order to understand the main difference between CompletableFuture and Mono, let's go back to the previously mentioned myNonBlockingHttpCall method implementation.
public CompletableFuture myUpperLevelBusinessLogic() {
    var future = myNonBlockingHttpCall();
    // ... some code
    if (something) {
        // oh, we don't really need anything, let's just throw an exception
        var errorFuture = new CompletableFuture();
        errorFuture.completeExceptionally(new RuntimeException());
        return errorFuture;
    }
    return future;
}
In the case of CompletableFuture, once the method is called, it eagerly executes the HTTP call to the other service/resource. Even if we end up not needing the result of the execution after verifying some pre/post conditions, the execution has already started, and additional CPU/DB-connections/whatever-machine-resources are allocated for that work.
In contrast, the Mono type is lazy by definition:
public Mono myNonBlockingHttpCallWithMono(Object someData) {
    return Mono.create(sink -> {
        myAsyncHttpClient.execute(someData, (result, exception) -> {
            if (exception != null) {
                sink.error(exception);
                return;
            }
            sink.success(result);
        });
    });
}
public Mono myUpperLevelBusinessLogic() {
    var mono = myNonBlockingHttpCallWithMono();
    // ... some code
    if (something) {
        // oh, we don't really need anything, let's just throw an exception
        return Mono.error(new RuntimeException());
    }
    return mono;
}
In this case, nothing happens until the final Mono is subscribed. Thus, only when the Mono returned by the myNonBlockingHttpCallWithMono method is subscribed will the logic passed to Mono.create(Consumer) be executed.
And we can go even further and make the execution even lazier. As you might know, Mono extends Publisher from the Reactive Streams specification, whose standout feature is backpressure support. Thus, using the Mono API we can execute the call only when the data is really needed and our subscriber is ready to consume it:
Mono.create(sink -> {
    AtomicBoolean once = new AtomicBoolean();
    sink.onRequest(__ -> {
        if (!once.get() && once.compareAndSet(false, true)) {
            myAsyncHttpClient.execute(someData, (result, exception) -> {
                if (exception != null) {
                    sink.error(exception);
                    return;
                }
                sink.success(result);
            });
        }
    });
});
In this example, the call is executed only once the subscriber has invoked Subscription#request, thereby declaring its readiness to receive data.
Summary
CompletableFuture is async and can be non-blocking
CompletableFuture is eager. You can't postpone the execution, but you can cancel it (which is better than nothing)
Mono is async/non-blocking and can easily execute any call on a different thread by composing the main Mono with different operators.
Mono is truly lazy and allows postponing the start of execution until a subscriber is present and ready to consume the data.
Building on Oleh's answer, a possible lazy solution for CompletableFuture would be:
public CompletableFuture myNonBlockingHttpCall(CompletableFuture<ExecutorService> dispatch, Object someData) {
    var uncompletedFuture = new CompletableFuture(); // creates an uncompleted future
    dispatch.thenAccept(x -> x.submit(() -> {
        myAsyncHttpClient.execute(someData, (result, exception) -> {
            if (exception != null) {
                uncompletedFuture.completeExceptionally(exception);
                return;
            }
            uncompletedFuture.complete(result);
        });
    }));
    return uncompletedFuture;
}
Then, later on you simply do
dispatch.complete(executor);
That would make CompletableFuture equivalent to Mono, but without backpressure, I guess.

How to test non-RxJava observables or async code in general?

I'm playing around with implementing my own observables or porting them from other languages for fun and profit.
The problem I've run into is that there's very little info on how to properly test observables or async code in general.
Consider the following test code:
// Create a stream of values emitted every 100 milliseconds
// `interval` uses Timer internally
final Stream<Number> stream =
        Streams.interval(100).map(number -> number.intValue() * 10);

ArrayList<Number> expected = new ArrayList<>();
expected.add(0);
expected.add(10);
expected.add(20);

IObserver<Number> observer = new IObserver<Number>() {
    public void next(Number x) {
        assertEquals(x, expected.get(0));
        expected.remove(0);
        if (expected.size() == 0) {
            stream.unsubscribe(this);
        }
    }
    public void error(Exception e) {}
    public void complete() {}
};
stream.subscribe(observer);
As soon as the stream is subscribed to, it emits the first value. onNext is called... And then the test exits successfully.
In JavaScript most test frameworks nowadays provide an optional Promise to the test case that you can call asynchronously on success/failure. Is anything similar available for Java?
Since the execution is asynchronous, you have to wait until it finishes. You can just wait for some time in the old-fashioned way:
your_code
wait(1000)
check results.
Or if you use Observables you can use TestSubscriber
In this example you can see how, given an async operation, we wait until the observer has consumed all items.
@Test
public void testObservableAsync() throws InterruptedException {
    TestSubscriber<Object> testSubscriber = new TestSubscriber<>();
    Observable.from(numbers)
            .doOnNext(increaseTotalItemsEmitted())
            .subscribeOn(Schedulers.newThread())
            .subscribe(testSubscriber);
    System.out.println("I finish before the observable finishes. Items emitted:" + total);
    testSubscriber.awaitTerminalEvent(100, TimeUnit.MILLISECONDS);
}
You can see more Asynchronous examples here https://github.com/politrons/reactive/blob/master/src/test/java/rx/observables/scheduler/ObservableAsynchronous.java
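For non-RxJava observables like the one in the question, a plain CountDownLatch achieves the same thing (a sketch reusing the question's Stream/IObserver types, which are the asker's own port):
@Test
public void testStreamEmitsFirstValues() throws InterruptedException {
    CountDownLatch done = new CountDownLatch(1);
    List<Number> received = Collections.synchronizedList(new ArrayList<>());

    Stream<Number> stream = Streams.interval(100).map(number -> number.intValue() * 10);
    stream.subscribe(new IObserver<Number>() {
        public void next(Number x) {
            received.add(x);
            if (received.size() == 3) {
                stream.unsubscribe(this); // stop further emissions
                done.countDown();         // release the test thread
            }
        }
        public void error(Exception e) { done.countDown(); }
        public void complete() { done.countDown(); }
    });

    assertTrue("timed out waiting for emissions", done.await(1, TimeUnit.SECONDS));
    assertEquals(Arrays.asList(0, 10, 20), received);
}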
