RxJava Combine Sequence Of Requests

The Problem
I have two APIs. API 1 gives me a list of items, and API 2 gives me more detailed information for each of the items I got from API 1. The way I have solved it so far results in bad performance.
The Question
What is an efficient and fast solution to this problem with the help of Retrofit and RxJava?
My Approach
At the moment my solution looks like this:
Step 1: Retrofit executes Single<ArrayList<Information>> from API 1.
Step 2: I iterate through these items and make a request to API 2 for each of them.
Step 3: Retrofit sequentially executes Single<ExtendedInformation> for each item.
Step 4: After all calls to API 2 have completed, I create a new object for each item, combining the Information and the ExtendedInformation.
My Code
public void addExtendedInformations(final Information[] informations) {
    final ArrayList<InformationDetail> informationDetailArrayList = new ArrayList<>();

    final JSONRequestRatingHelper.RatingRequestListener ratingRequestListener = new JSONRequestRatingHelper.RatingRequestListener() {
        @Override
        public void onDownloadFinished(Information baseInformation, ExtendedInformation extendedInformation) {
            informationDetailArrayList.add(new InformationDetail(baseInformation, extendedInformation));
            // Once every item has reported back, hand the combined list to the listener.
            if (informationDetailArrayList.size() >= informations.length) {
                listener.onAllExtendedInformationLoadedAndCombined(informationDetailArrayList);
            }
        }
    };

    for (Information information : informations) {
        getExtendedInformation(ratingRequestListener, information);
    }
}

public void getExtendedInformation(final JSONRequestRatingHelper.RatingRequestListener ratingRequestListener, final Information information) {
    Single<ExtendedInformation> repos = service.findForTitle(information.title);
    disposable.add(repos.subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribeWith(new DisposableSingleObserver<ExtendedInformation>() {
                @Override
                public void onSuccess(ExtendedInformation extendedInformation) {
                    ratingRequestListener.onDownloadFinished(information, extendedInformation);
                }

                @Override
                public void onError(Throwable e) {
                    // Fall back to an empty ExtendedInformation so the count still adds up.
                    ExtendedInformation extendedInformation = new ExtendedInformation();
                    ratingRequestListener.onDownloadFinished(information, extendedInformation);
                }
            }));
}

public interface RatingRequestListener {
    void onDownloadFinished(Information information, ExtendedInformation extendedInformation);
}

tl;dr use concatMapEager or flatMap and execute the sub-calls asynchronously or on a scheduler.
long story
I'm not an Android developer, so my answer will be limited to pure RxJava (version 1 and version 2).
If I get the picture right, the needed flow is:
some query param
\--> Execute query on API_1 -> list of items
      |-> Execute query for item 1 on API_2 -> extended info of item 1
      |-> Execute query for item 2 on API_2 -> extended info of item 2
      |-> Execute query for item 3 on API_2 -> extended info of item 3
      ...
      \-> Execute query for item n on API_2 -> extended info of item n
\----------------------------------------------------------------------/
|
\--> stream (or list) of extended item info for the query param
Assuming Retrofit generated the clients for
interface Api1 {
    @GET("/api1") Observable<List<Item>> items(@Query("param") String param);
}

interface Api2 {
    @GET("/api2/{item_id}") Observable<ItemExtended> extendedInfo(@Path("item_id") String item_id);
}
If the order of the items is not important, then it is possible to use flatMap only:
api1.items(queryParam)
        .flatMap(itemList -> Observable.fromIterable(itemList))
        .flatMap(item -> api2.extendedInfo(item.id()))
        .subscribe(...)
But only if the retrofit builder is configured with one of the following.
Either with the async adapter (calls will be queued in the OkHttp internal executor). I personally think this is not a good idea, because you don't have control over this executor:
.addCallAdapterFactory(RxJava2CallAdapterFactory.createAsync())
Or with the scheduler-based adapter (calls will be scheduled on the given RxJava scheduler). This would be my preferred option, because you explicitly choose which scheduler is used; it will most likely be the IO scheduler, but you are free to try a different one:
.addCallAdapterFactory(RxJava2CallAdapterFactory.createWithScheduler(Schedulers.io()))
The reason is that flatMap will subscribe to each observable created by api2.extendedInfo(...) and merge them in the resulting observable. So results will appear in the order they are received.
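For reference, a complete Retrofit builder wired with the scheduler-based adapter might look like this (a minimal sketch; the base URL and the Gson converter are illustrative assumptions, not part of the question):
Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://example.com/")                     // placeholder base URL
        .addConverterFactory(GsonConverterFactory.create())  // assuming a Gson converter
        .addCallAdapterFactory(RxJava2CallAdapterFactory.createWithScheduler(Schedulers.io()))
        .build();

Api1 api1 = retrofit.create(Api1.class);
Api2 api2 = retrofit.create(Api2.class);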
If the retrofit client is not set to be async or set to run on a scheduler, it is possible to set one locally:
api1.items(queryParam)
        .flatMap(itemList -> Observable.fromIterable(itemList))
        .flatMap(item -> api2.extendedInfo(item.id()).subscribeOn(Schedulers.io()))
        .subscribe(...)
This structure is almost identical to the previous one, except it indicates locally on which scheduler each api2.extendedInfo call is supposed to run.
It is possible to tune the maxConcurrency parameter of flatMap to control how many requests you want to perform at the same time. Although I'd be cautious on this one, you don't want to run all queries at the same time. Usually the default maxConcurrency (128) is good enough.
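For example, capping the concurrency at 4 could look like this (a sketch; the limit of 4 is purely illustrative):
api1.items(queryParam)
        .flatMap(itemList -> Observable.fromIterable(itemList))
        .flatMap(item -> api2.extendedInfo(item.id()).subscribeOn(Schedulers.io()),
                4) // maxConcurrency: at most 4 requests in flight at once
        .subscribe(...)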
Now, if the order of the original query matters: concatMap is usually the operator that does the same thing as flatMap in order, but sequentially, which turns out to be slow if the code needs to wait for all sub-queries to be performed. The solution, though, is one step further with concatMapEager; this one will subscribe to the observables in order and buffer the results as needed.
Assuming the retrofit clients are async or run on a specific scheduler:
api1.items(queryParam)
        .flatMap(itemList -> Observable.fromIterable(itemList))
        .concatMapEager(item -> api2.extendedInfo(item.id()))
        .subscribe(...)
Or if the scheduler has to be set locally:
api1.items(queryParam)
        .flatMap(itemList -> Observable.fromIterable(itemList))
        .concatMapEager(item -> api2.extendedInfo(item.id()).subscribeOn(Schedulers.io()))
        .subscribe(...)
It is also possible to tune the concurrency in this operator.
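A sketch of the same chain with explicit limits (the numbers are illustrative only):
api1.items(queryParam)
        .flatMap(itemList -> Observable.fromIterable(itemList))
        .concatMapEager(item -> api2.extendedInfo(item.id()).subscribeOn(Schedulers.io()),
                4,  // maxConcurrency: how many inner sources are subscribed to eagerly
                16) // prefetch: how many items to buffer per inner source
        .subscribe(...)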
Additionally, if the API is returning Flowable, it is possible to use .parallel, which is still in beta at the time of writing (RxJava 2.1.7). But then the results are not in order, and I don't know a way (yet?) to order them without sorting afterwards.
api.items(queryParam) // Flowable<Item>
.parallel(10)
.runOn(Schedulers.io())
.map(item -> api2.extendedInfo(item.id()))
.sequential(); // Flowable<ItemExtended>

The flatMap operator is designed to cater to these types of workflows.
I'll outline the broad strokes with a simple five-step example. Hopefully you can easily reconstruct the same principles in your code:
@Test fun flatMapExample() {
    // (1) constructing a fake stream that emits a list of values
    Observable.just(listOf(1, 2, 3, 4, 5))
        // (2) convert our List emission into a stream of its constituent values
        .flatMap { numbers -> Observable.fromIterable(numbers) }
        // (3) subsequently convert each individual value emission into an Observable of some
        //     newly calculated type
        .flatMap { number ->
            when (number) {
                1 -> Observable.just("A1")
                2 -> Observable.just("B2")
                3 -> Observable.just("C3")
                4 -> Observable.just("D4")
                5 -> Observable.just("E5")
                else -> throw RuntimeException("Unexpected value for number [$number]")
            }
        }
        // (4) collect all the final emissions into a list
        .toList()
        .subscribeBy(
            onSuccess = {
                // (5) handle all the combined results (in list form) here
                println("## onNext($it)")
            },
            onError = { error ->
                println("## onError(${error.message})")
            }
        )
}
(Incidentally, if the order of the emissions matters, look at using concatMap instead.)
I hope that helps.

Check below, it's working.
Say you have multiple network calls you need to make - calls to get GitHub user information and GitHub user events, for example.
And you want to wait for each to return before updating the UI. RxJava can help you here.
Let's first define our Retrofit object to access GitHub's API, then set up two observables for the two network request calls.
Retrofit repo = new Retrofit.Builder()
.baseUrl("https://api.github.com")
.addConverterFactory(GsonConverterFactory.create())
.addCallAdapterFactory(RxJavaCallAdapterFactory.create())
.build();
Observable<JsonObject> userObservable = repo
.create(GitHubUser.class)
.getUser(loginName)
.subscribeOn(Schedulers.newThread())
.observeOn(AndroidSchedulers.mainThread());
Observable<JsonArray> eventsObservable = repo
.create(GitHubEvents.class)
.listEvents(loginName)
.subscribeOn(Schedulers.newThread())
.observeOn(AndroidSchedulers.mainThread());
The interfaces used for it look like below:
public interface GitHubUser {
    @GET("users/{user}")
    Observable<JsonObject> getUser(@Path("user") String user);
}

public interface GitHubEvents {
    @GET("users/{user}/events")
    Observable<JsonArray> listEvents(@Path("user") String user);
}
Next, we use RxJava's zip method to combine our two Observables and wait for both of them to complete before creating a new Observable.
Observable<UserAndEvents> combined = Observable.zip(userObservable, eventsObservable, new Func2<JsonObject, JsonArray, UserAndEvents>() {
    @Override
    public UserAndEvents call(JsonObject jsonObject, JsonArray jsonElements) {
        return new UserAndEvents(jsonObject, jsonElements);
    }
});
Finally let’s call the subscribe method on our new combined Observable:
combined.subscribe(new Subscriber<UserAndEvents>() {
    ...
    @Override
    public void onNext(UserAndEvents o) {
        // You can access the results of the
        // two observables via the POJO now
    }
});
No more waiting in threads etc. for network calls to finish. RxJava has done all that for you in zip().
I hope my answer helps you.

I solved a similar problem with RxJava2. Executing the requests to API 2 in parallel slightly speeds up the work.
private InformationRepository informationRepository;
//init....
public Single<List<FullInformation>> getFullInformation() {
return informationRepository.getInformationList()
.subscribeOn(Schedulers.io())//I usually write subscribeOn() in the repository, here - for clarity
.flatMapObservable(Observable::fromIterable)
.flatMapSingle(this::getFullInformation)
.collect(ArrayList::new, List::add);
}
private Single<FullInformation> getFullInformation(Information information) {
return informationRepository.getExtendedInformation(information)
.map(extendedInformation -> new FullInformation(information, extendedInformation))
.subscribeOn(Schedulers.io());//execute requests in parallel
}
InformationRepository is just an interface; its implementation is not interesting for us here.
public interface InformationRepository {
Single<List<Information>> getInformationList();
Single<ExtendedInformation> getExtendedInformation(Information information);
}
FullInformation is a container for the result.
public class FullInformation {
private Information information;
private ExtendedInformation extendedInformation;
public FullInformation(Information information, ExtendedInformation extendedInformation) {
this.information = information;
this.extendedInformation = extendedInformation;
}
}
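For completeness, a usage sketch (the service field holding the class above and the UI handling are assumptions, not part of this answer):
// `service` is assumed to expose the getFullInformation() method shown above
service.getFullInformation()
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                fullInformationList -> { /* render the combined items */ },
                throwable -> { /* handle the error */ });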

Try using the Observable.zip() operator. It will wait until both API calls are finished before continuing the stream. Then you can insert some logic by calling flatMap() afterwards.
http://reactivex.io/documentation/operators/zip.html
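A minimal sketch of that idea, assuming two Retrofit calls that return Single (the service methods and saveOrPostProcess are hypothetical names, not from the question):
Single<Information> infoCall = api1.getInformation(id);                 // hypothetical call to API 1
Single<ExtendedInformation> extendedCall = api2.getExtendedInformation(id); // hypothetical call to API 2

Single.zip(infoCall, extendedCall, InformationDetail::new)
        .flatMap(detail -> saveOrPostProcess(detail)) // saveOrPostProcess: hypothetical follow-up returning another Single
        .subscribeOn(Schedulers.io())
        .subscribe(
                result -> { /* use the combined result */ },
                throwable -> { /* handle the error */ });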

Related

Mutiny - How to group items to send requests in blocks

I'm using the Mutiny extension (for Quarkus) and I don't know how to manage this problem.
I want to send many requests in an async way, so I've read about the Mutiny extension. But the server closes the connection because it receives thousands of them.
So I need to:
Send the requests in blocks
After all requests are sent, do things.
I've been using the Uni object to combine all the responses like this:
Uni<Map<Integer, String>> uniAll = Uni.combine()
        .all()
        .unis(list)
        .combinedWith(...);
And then:
uniAll.subscribe()
        .with(...);
This code sends all the requests in parallel, so the server closes the connection.
I'm using groups of Multi objects, but I don't know how to use them (in the Mutiny docs I can't find any example).
This is the way I'm doing it now:
// Launch 1000 requests
for (int i = 0; i < 1000; i++) {
    multi = client.getAbs("https://api.*********.io/jokes/random")
            .as(BodyCodec.jsonObject())
            .send()
            .onItem().transformToMulti(
                    array -> Multi.createFrom()
                            .item(array.body().getString("value")))
            .group()
            .intoLists()
            .of(100)
            .subscribe()
            .with(a -> {
                System.out.println("Value: " + a);
            });
}
I think the subscription doesn't execute until there are groups of 100 items, but I guess this is not the way, because it doesn't work.
Does anybody know how to launch 1000 async requests in blocks of 100?
Thanks in advance.
UPDATED 2021-04-19
I've tried with this approach:
List<Uni<String>> listOfUnis = new ArrayList<>();

for (int i = 0; i < 1000; i++) {
    listOfUnis.add(client
            .getAbs("https://api.*******.io/jokes/random")
            .as(BodyCodec.jsonObject())
            .send()
            .onItem()
            .transform(item -> item
                    .body()
                    .getString("value")));
}

Multi<Uni<String>> multiFormUnis = Multi.createFrom()
        .iterable(listOfUnis);

List<String> listOfResponses = new ArrayList<>();

List<String> listOfValues = multiFormUnis.group()
        .intoLists()
        .of(100)
        .onItem()
        .transformToMultiAndConcatenate(listOfOneHundred -> {
            System.out.println("Size: " + listOfOneHundred.size());
            for (int index = 0; index < listOfOneHundred.size(); index++) {
                listOfResponses.add(listOfOneHundred.get(index)
                        .await()
                        .indefinitely());
            }
            return Multi.createFrom()
                    .iterable(listOfResponses);
        })
        .collectItems()
        .asList()
        .await()
        .indefinitely();

for (String value : listOfValues) {
    System.out.println(value);
}
When I put this line:
listOfResponses.add(listOfOneHundred.get(index)
.await()
.indefinitely());
The responses are printed one after another, and when the first group of 100 items ends, it prints the next group. The problem? The requests are sequential and it takes too much time.
I think I am close to the solution, but I need to know how to send the parallel requests only in groups of 100, because if I put:
subscribe().with()
all the requests are sent in parallel (and not in groups of 100).
I think you create the Multi wrong; it would be much easier to use this:
Multi<String> multiOfJokes = Multi.createFrom().emitter(multiEmitter -> {
    for (int i = 0; i < 1000; i++) {
        multiEmitter.emit(i);
    }
    multiEmitter.complete();
}).onItem().transformToUniAndMerge(index -> {
    return Uni.createFrom().item("String" + index);
});
With this approach it should make the calls parallel.
Now the question is how to turn it into a list.
The grouping works fine.
I ran it with this code:
Random random = new Random();

Multi<Integer> multiOfInteger = Multi.createFrom().emitter(multiEmitter -> {
    for (Integer i = 0; i < 1000; i++) {
        multiEmitter.emit(i);
    }
    multiEmitter.complete();
});

Multi<String> multiOfJokes = multiOfInteger.onItem().transformToUniAndMerge(index -> {
    if (index % 10 == 0) {
        Duration delay = Duration.ofMillis(random.nextInt(100) + 1);
        return Uni.createFrom().item("String " + index + " delayed").onItem()
                .delayIt().by(delay);
    }
    return Uni.createFrom().item("String" + index);
}).onCompletion().invoke(() -> System.out.println("Completed"));

Multi<List<String>> multiListJokes = multiOfJokes
        .group().intoLists().of(100)
        .onCompletion().invoke(() -> System.out.println("Completed"))
        .onItem().invoke(strings -> System.out.println(strings));

multiListJokes.collect().asList().await().indefinitely();
You will get a list of your strings.
I don't know how you intend to send the list to the backend, but you can do it either with:
a call (executed asynchronously), or
your own subscriber (implements Subscriber; the methods are straightforward),
as you need for your bulk request.
I hope you understand it better afterwards.
PS: link to the guide where I learned all of it:
https://smallrye.io/smallrye-mutiny/guides
So in short you want to batch parallel calls to the server, without hitting it with everything at once.
Could this work for you? It uses merge. In my example, it has a parallelism of 2.
Multi.createFrom().range(1, 10)
.onItem()
.transformToUni(integer -> {
return <<my long operation Uni>>
})
.merge(2) //this is the concurrency
.collect()
.asList();
I'm not sure if merge was added later this year, but this seems to do what you want. In my example, the "long operation producing Uni" is actually a call to the Microprofile Rest Client which produces a Uni, and returns a string. After the merge you can put another onItem to perform something with the response (it's a plain Multi after the merge), instead of collecting everything as list.
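A sketch of that variant with a follow-up onItem stage instead of collecting a list (restClient.getJoke() is a placeholder for whatever Uni-returning call you make):
Multi.createFrom().range(1, 1001)                            // 1000 requests
        .onItem().transformToUni(i -> restClient.getJoke())  // placeholder Uni-returning call
        .merge(100)                                          // at most 100 requests in flight at a time
        .onItem().invoke(body -> System.out.println("Received: " + body))
        .subscribe().with(
                body -> { /* handle each response */ },
                failure -> { /* handle a failed request */ });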

Java Reactive stream how to map an object when the object being mapped is also needed on the next step of the stream

I am using Java 11 and Project Reactor (from Spring). I need to make an HTTP call to a REST API (I can only make it once in the whole flow).
With the response I need to compute two things:
Check if a document exists in the database (MongoDB). If it does not exist, then create it and return it. Otherwise just return it.
Compute some logic on the response and we are done.
In pseudo code it is something like this:
public void computeData(String id) {
    httpClient.getData(id) // Returns a Mono<Data>
            .flatMap(data -> getDocument(data.getDocumentId()))
            // Issue here: we need access to the data object consumed in the previous flatMap,
            // but at the same time we also need the document object we get from that flatMap
            .flatMap(document -> calculateValue(document, data))
            .subscribe();
}

public Mono<Document> getDocument(String id) {
    // Check if document exists
    // If not create document
    return document;
}

public Mono<Value> calculateValue(Document doc, Data data) {
    // Do something...
    return value;
}
The issue is that calculateValue needs the return value from httpClient.getData, but that was already consumed in the first flatMap; at the same time we also need the document object we get from that flatMap.
I tried to solve this issue using Mono.zip like below:
public void computeData(String id) {
    final Mono<Data> dataMono = httpClient.getData(id);

    Mono.zip(
            new Mono<Mono<Document>>() {
                @Override
                public void subscribe(CoreSubscriber<? super Mono<Document>> actual) {
                    final Mono<Document> documentMono = dataMono.flatMap(data -> getDocument(data.getDocumentId()));
                    actual.onNext(documentMono);
                }
            },
            new Mono<Mono<Data>>() {
                @Override
                public void subscribe(CoreSubscriber<? super Mono<Data>> actual) {
                    actual.onNext(dataMono);
                }
            }
    )
    .flatMap(objects -> {
        final Mono<Document> documentMono = objects.getT1();
        final Mono<Data> dataMono = objects.getT2();
        return Mono.zip(documentMono, dataMono, (document, data) -> calculateValue(document, data));
    });
}
But this is executing httpClient.getData(id) twice, which goes against my constraint of only calling it once. I understand why it is being executed twice (I subscribe to it twice).
Maybe my solution design can be improved somewhere, but I do not see where. To me this sounds like a "normal" issue when designing reactive code, but I could not find a suitable solution to it so far.
My question is: how can I accomplish this flow in a reactive and non-blocking way, making only one call to the REST API?
PS: I could add all the logic inside one single map, but that would force me to subscribe to one of the Monos inside the map, which is not recommended and I want to avoid following this approach.
EDIT regarding @caco3's comment
I need to subscribe inside the map because both getDocument and calculateValue methods return a Mono.
So, if I wanted to put all the logic inside one single map it would be something like:
public void computeData(String id) {
httpClient.getData(id)
.map(data -> getDocument(data).subscribe(s -> calculateValue(s, data)))
.subscribe();
}
You do not have to subscribe inside map, just continue building the reactive chain inside the flatMap:
getData(id) // Mono<Data>
.flatMap(data -> getDocument(data.getDocumentId()) // Mono<Document>
.switchIfEmpty(createDocument(data.getDocumentId())) // Mono<Document>
.flatMap(document -> calculateValue(document, data)) // Mono<Value>
)
.subscribe()
Boiling it down, your problem is analogous to:
Mono.just(1)
    .flatMap(original -> process(original))
    .flatMap(processed -> //Help - I need access to the original value and the processed value!
        System.out.println(original); //Won't work
    );

private static Mono<String> process(int in) {
    return Mono.just(in + " is an integer").delayElement(Duration.ofSeconds(2));
}
(Silly example, I know.)
The problem is that map() (and by extension, flatMap()) are transformations - you get access to the new value, and the old one goes away. So in your second flatMap() call, you've got access to 1 is an integer, but not the original value (1).
The solution here is to, instead of mapping to the new value, map to some kind of merged result that contains both the original and new values. Reactor provides a built-in type for that - a Tuple. So editing our original example, we'd have:
Mono.just(1)
    .flatMap(original -> operation(original))
    .flatMap(processed -> { //Help - I need access to the original value and the processed value!
        System.out.println(processed.getT1()); //Original
        System.out.println(processed.getT2()); //Processed
        //etc.
    });

private static Mono<Tuple2<Integer, String>> operation(int in) {
    return Mono.just(in + " is an integer").delayElement(Duration.ofSeconds(2))
            .map(newValue -> Tuples.of(in, newValue));
}
You can use the same strategy to "hold on" to both document and data - no need for inner subscribes or anything of the sort :-)
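Applied to the original flow, that could look roughly like this (a sketch reusing the getDocument and calculateValue signatures from the question):
httpClient.getData(id) // Mono<Data>
        .flatMap(data -> getDocument(data.getDocumentId())
                .map(document -> Tuples.of(data, document)))              // Mono<Tuple2<Data, Document>>
        .flatMap(tuple -> calculateValue(tuple.getT2(), tuple.getT1()))   // Mono<Value>
        .subscribe();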

Using reactor's Flux.buffer to batch work only works for single item

I'm trying to use Flux.buffer() to batch up loads from a database.
The use case is that loading records from a DB may be 'bursty', and I'd like to introduce a small buffer to group together loads where possible.
My conceptual approach has been to use some form of processor, publish to its sink, let that buffer, and then subscribe & filter for the result I want.
I've tried multiple different approaches (different types of processors, creating the filtered Mono in different ways).
Below is where I've gotten so far - largely by stumbling.
Currently, this returns a single result, but subsequent calls are dropped (though I'm unsure of where).
class BatchLoadingRepository {
    // I've tried all manner of different processors here. I'm unsure if
    // TopicProcessor is the correct one to use.
    private val bufferPublisher = TopicProcessor.create<String>()

    private val resultsStream = bufferPublisher
        .bufferTimeout(50, Duration.ofMillis(50))
        // I'm unsure if concatMapIterable is the correct operator here,
        // but it seems to work.
        // I'm really trying to turn the List<MyEntity>
        // into a stream of MyEntity, published on the Flux<>
        .concatMapIterable { requestedIds ->
            // this is a Spring Data repository. It returns List<MyEntity>
            repository.findAllById(requestedIds)
        }

    // Multiple callers will invoke this method, and then subscribe to receive
    // their entity back.
    fun findByIdAsync(id: String): Mono<MyEntity> {
        // Is there a potential race condition here, caused by a result
        // on the resultsStream, before I've subscribed?
        return Mono.create<MyEntity> { sink ->
            bufferPublisher.sink().next(id)
            resultsStream.filter { it.id == id }
                .subscribe { next ->
                    sink.success(next)
                }
        }
    }
}
Hi, I was testing your code and I think the best way is to use a shared EmitterProcessor. I did a test with EmitterProcessor and it seems to work.
Flux<String> fluxi;
EmitterProcessor<String> emitterProcessor;

@Override
public void run(String... args) throws Exception {
    emitterProcessor = EmitterProcessor.create();
    fluxi = emitterProcessor.share()
            .bufferTimeout(500, Duration.ofMillis(500))
            .concatMapIterable(o -> o);

    Flux.range(0, 1000)
            .flatMap(integer -> findByIdAsync(integer.toString()))
            .map(s -> {
                System.out.println(s);
                return s;
            }).subscribe();
}

private Mono<String> findByIdAsync(String id) {
    return Mono.create(monoSink -> {
        fluxi.filter(s -> s.equals(id)).subscribe(value -> monoSink.success(value));
        emitterProcessor.onNext(id);
    });
}

RxJava sorted output from parallel computation

I have a list of tasks I want to perform in parallell, but I want to display the result of the tasks in the same order as the original list.
In other words, if I have task list [A,B,C], I do not wish to show B-result before I have shown A-result, but nor do I want to wait until A-task is finished before starting B-task.
Additionally, I want to show each result as soon as possible, in other words, if the tasks finish in the order B, then A, then C, I do not want to show anything when I receive B-result, then show A-result immediately followed by B-result when I receive A-result, then show C-result whenever I receive it.
This is of course not terribly tricky to do by making an Observable for each task, combining them with merge, and subscribing on a computation thread pool, then writing a Subscriber which holds a buffer for any results received out of order. However, the Rx rule of thumb tends to be "there's already an operator for that", so the question is "what is the proper RxJava way to solve this?" if indeed there is such a thing.
It seems you need concatEager for this task, but it is somewhat possible to achieve it with pre-1.0.15 tools and no need for "creating" Observables. Here is an example of that:
Observable<Long> source1 = Observable.interval(100, 100, TimeUnit.MILLISECONDS).take(10);
Observable<Long> source2 = Observable.interval(100, 100, TimeUnit.MILLISECONDS).take(20);
Observable<Long> source3 = Observable.interval(100, 100, TimeUnit.MILLISECONDS).take(15);

Observable<Observable<Long>> sources = Observable.just(source1, source2, source3);

sources.map(v -> {
    Observable<Long> c = v.cache();
    c.subscribe(); // to cache all
    return c;
})
.onBackpressureBuffer() // make sure all source started
.concatMap(v -> v)
.toBlocking()
.forEach(System.out::println);
The drawback is that it retains all values for the whole duration of the sequence. This can be fixed with a special kind of Subject: UnicastSubject, but RxJava 1.x doesn't have one and may not get one "officially". You can, however, look at one of my blog posts, build it for yourself, and have the following code:
//...
sources.map(v -> {
UnicastSubject<Long> subject = UnicastSubject.create();
v.subscribe(subject);
return subject;
})
//...
"There's not quite an operator for that". Although, in the 1.0.15-SNAPSHOT build there is an experimental concatEagar() operator those sounds like it does what you're looking for. Pull request for concatEager
repositories {
maven { url 'https://oss.jfrog.org/libs-snapshot' }
}
dependencies {
compile 'io.reactivex:rxjava:1.0.15-SNAPSHOT'
}
If you want to roll your own temporary solution until concatEager() gets the nod of approval, you could try something like this:
public Observable<Result> concatEager(final Observable<Result> taskA, final Observable<Result> taskB, final Observable<Result> taskC) {
    return Observable
            .create(subscriber -> {
                final Observable<Result> taskACached = taskA.cache();
                final Observable<Result> taskBCached = taskB.cache();
                final Observable<Result> taskCCached = taskC.cache();

                // Kick off all the tasks simultaneously.
                subscriber.add(
                        Observable
                                .merge(taskACached, taskBCached, taskCCached)
                                .subscribe(
                                        result -> { // Throw away result
                                        },
                                        throwable -> { // Ignore errors
                                        }
                                )
                );

                // Put the results in order.
                subscriber.add(
                        Observable
                                .concat(taskACached, taskBCached, taskCCached)
                                .subscribe(subscriber)
                );
            });
}
Note that the above code is totally untested. There are probably better ways to do this but this is what first came to mind...

How do I chain execution of two independent Observables serially without nesting the calls?

Using RxJava I have an Observable<A> and an Observable<B>. I want to start subscription on B as soon as the first (and only) element of A is emitted. I know I can chain it like this:
final Observable<A> obsOfA;
final Observable<B> obsOfB;

obsOfA.subscribe(new Action1<A>() {
    @Override
    public void call(A a) {
        obsOfB.subscribe(...)
    }
});
But this will cause nested syntax which gets ugly as soon as we introduce Observable<C>. How can I "unwrap" the syntax to a more fluent one - something more like the JavaScript Promise.then() flow?
You should use flatMap:
obsOfA.flatMap(new Func1<A, Observable<B>>() {
    @Override
    public Observable<B> call(A a) {
        return obsOfB;
    }
})
.subscribe(/* obsOfB has completed */);
Every time obsOfA calls onNext(a), call will be executed with this value a.
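To extend the chain with a third observable without any additional nesting, the same pattern simply repeats (a sketch; obsOfC stands for the hypothetical Observable<C> from the question):
obsOfA
        .flatMap(a -> obsOfB)
        .flatMap(b -> obsOfC)
        .subscribe(/* runs with each C once A and B have emitted */);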
You can use switch, combined with map in switchMap:
obsOfA.switchMap(i -> obsOfB)
.subscribe(/* obsOfB has completed */);
This does almost the same as merge in flatMap as long as obsOfA only yields one value, but when it yields more values, flatMap will combine them, while switch will only stay subscribed to the last instance of obsOfB. This might be useful when you need to switch to a different stream.
