My question is whether or not Flux has the ability to behave like an Observable or BehaviorSubject. I think I get the gist of what a Flux does and how, but every tutorial I see creates a Flux of static content, i.e. some pre-existing array of numbers which are finite in nature.
However, I want my Flux to be a stream of unknown values over time... like an Observable or BehaviorSubject. With those, you can create a method like setNextValue(String value), and pump those values to all subscribers of the Observable/BehaviorSubject etc.
Is this possible with a Flux? Or does the Flux have to be composed of an Observable type stream of values first?
Update
I answered my own question with an implementation below. The accepted answer would probably lead down the same path, but it's slightly more complicated.
every tutorial I see creates a Flux of static content, i.e. some pre-existing array of numbers which are finite in nature.
You'll see this because most tutorials focus on how to manipulate and use a Flux, but the implication (that a Flux is just for static, fixed-length content) is both unfortunate and wrong. It's much more powerful than that, and static content is almost certainly not how you'll see it used in the real world.
There are essentially three different ways of instantiating a Flux that emits elements dynamically, as you describe:
However, I want my Flux to be a stream of unknown values over time... like an Observable or BehaviorSubject. With those, you can create a method like setNextValue(String value), and pump those values to all subscribers of the Observable/BehaviorSubject etc.
Absolutely - have a look at Flux.push(). This exposes an emitter, and emitter.next(value) can be called whenever you wish. This stream can go on for as long as you want it to (infinitely, if desired.) Flux.create() is essentially the multi-threaded variant of Flux.push(), which may also be of use.
Flux.generate() may also be worth a look - this is a bit like an "on-demand" version of Flux.push(), where you only emit the next element via a callback when the downstream consumer requests it, rather than emitting whenever you want to. This isn't always practical, but it makes sense to use this method if the use-case makes it feasible, as it respects backpressure and thus can be guaranteed not to overwhelm the consumer with more requests than it can handle.
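For illustration, here is a minimal sketch of the Flux.push() approach (the class and method names are hypothetical, and the sink only becomes available once the first subscriber arrives):

import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

// Hypothetical wrapper exposing a value stream backed by Flux.push()
class ValueStream {

    private FluxSink<String> sink;

    // The push callback runs when the first subscriber arrives (because of share())
    // and hands us the emitter we can call whenever a new value is available.
    private final Flux<String> values = Flux.<String>push(emitter -> this.sink = emitter)
            .share(); // multicast to all current subscribers

    public Flux<String> getValues() {
        return values;
    }

    public void setNextValue(String value) {
        if (sink != null) { // no-op until someone has subscribed
            sink.next(value);
        }
    }
}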
Here is the implementation I ended up with (my own answer, using an EmitterProcessor):
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

// wrapper class name chosen for illustration
public class StatusPublisher {

    private EmitterProcessor<String> processor;
    private FluxSink<String> statusSink;
    private Flux<String> status;

    public StatusPublisher() {
        this.processor = EmitterProcessor.create();
        this.statusSink = this.processor.sink(FluxSink.OverflowStrategy.BUFFER);
        this.status = this.processor.publish().autoConnect();
    }

    public Flux<String> getStatus() {
        return this.status;
    }

    public void setStatus(String status) {
        this.statusSink.next(status);
    }
}
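A quick usage sketch (class name as in the snippet above; the status values are just examples):

StatusPublisher publisher = new StatusPublisher();
publisher.getStatus().subscribe(s -> System.out.println("subscriber A: " + s));
publisher.setStatus("STARTED");
publisher.setStatus("RUNNING"); // both values are pushed to subscriber A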
Related
Using non-blocking calls, I want to generically take a Mono, call a method that returns a Flux, and for each item in that Flux call a method that returns a Mono, in order to return a Flux of an aggregate object (Bar + Foo + More) that has as many elements as the Flux method returns (will return).
As a concrete example:
Methods:
Flux<Bar> getBarsByFoo(Foo foo);
Mono<More> getMoreByBar(Bar bar);
Combined getCombinedFrom(Bar bar, Foo foo, More more);
Working code section:
Flux<Combined> getCombinedByFoo(Foo foo) {
    getBarsByFoo(foo)...
}
From a blocking perspective what I want to accomplish is:
List<Combined> getCombinedByFoo(Foo foo) {
    List<Bar> bars = getBarsByFoo(foo);
    List<Combined> combinedList = new ArrayList<>(bars.size());
    for (Bar bar : bars) {
        More more = getMoreByBar(bar);
        combinedList.add(getCombinedFrom(bar, foo, more));
    }
    return combinedList;
}
Any help on which Flux and Mono methods to use would be appreciated. I am still learning to shift my brain into non-blocking thinking. Conceptually, I think there is a function to apply to each element (Bar) emitted from getBarsByFoo(Foo foo) to somehow map it to the combined element...
I like to think about Reactor programming as a flow of operations (as in flow programming), as a chain/DAG of operations.
In your case, you want to:
map each emitted Bar object to a Combined object.
Along the way, you need to use/call another publisher to fetch additional information:
you need to wait for it to complete so you can fetch its output value. In the case of Monads/streams, there's a flatMap operation for it.
flatMap waits for (or you can say that it extracts) a different publisher's value to integrate it into the current chain of operations. I think it is called flatMap because, in a sense, we break a level of hierarchy to flatten two nested publishers/monads into a single merged one.
The following example shows a reactive version of your method (for a less verbose version, see Toerktumlare's answer):
Flux<Combined> combine(Foo foo) {
    Flux<Bar> bars = getBarsByFoo(foo);
    Flux<Combined> result = bars.flatMap(bar -> {
        Mono<More> nextMore = getMoreByBar(bar);
        Mono<Combined> next = nextMore.map(more -> getCombinedFrom(bar, foo, more));
        return next;
    });
    return result;
}
If you get your foo object through a Mono, you can just call flatMapMany on it:
Mono<Foo> nextFoo = ...;
Flux<Combined> combined = nextFoo.flatMapMany(foo -> combine(foo));
WARNING
flatMap is very powerful: it can trigger concurrent execution of the provided operation. In your case, it means that many getMoreBy(bar) operations can be launched at the same time. But it is a double-edged sword, because then it means that:
ordering of elements is not preserved (or at least, there's no guarantee)
In a resource-constrained system, having multiple operations launched at the same time could hurt performance or cause harm to the system (too many open files, etc.)
The default concurrency is quite high (256) and can be controlled in different ways:
flatMap accepts an optional concurrency argument, to adapt the number of tasks allowed to run at the same time.
There are other operators that flatten publishers but manage work differently, like concatMap: it enforces sequential execution (and therefore preserves ordering) of the mapping tasks. A short sketch of both options follows this list.
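As a sketch, reusing the method signatures from the question (the concurrency value of 4 is just illustrative):

// limit flatMap to at most 4 in-flight getMoreByBar(bar) calls at a time
Flux<Combined> limited = getBarsByFoo(foo)
        .flatMap(bar -> getMoreByBar(bar)
                .map(more -> getCombinedFrom(bar, foo, more)), 4);

// or keep execution strictly sequential (and therefore ordered) with concatMap
Flux<Combined> ordered = getBarsByFoo(foo)
        .concatMap(bar -> getMoreByBar(bar)
                .map(more -> getCombinedFrom(bar, foo, more)));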
Something like this:
Flux<Combined> getCombinedByFoo(Foo foo) {
    return getBarsByFoo(foo)
        .flatMap(bar -> getMoreByBar(bar)
            .map(more -> getCombinedFrom(bar, foo, more)));
}
I don't have a way to check this; I wrote it freehand, but it should be something like this.
I have a service with which I have registered a call back, and now I want to expose it as Flowable with certain requirements/limitations:
The thread receiving the callback should not be blocked (work should be handed off to a different thread/scheduler specified by the observer)
There should not be any exceptions thrown due to consumers being slow downstream
Multiple consumers can subscribe to it independently of each other
Consumers can choose to buffer all the items so that none of them are lost; however, they should not be buffered in the 'producer' class
Below is what I have currently
class MyBroadcaster {

    private PublishProcessor<Packet> packets = PublishProcessor.create();

    private Flowable<Packet> backpressuredPackets = packets.onBackpressureLatest();

    public MyBroadcaster() {
        // this is actually different to my exact use but same conceptually
        registerCallback(packets::onNext);
    }

    public Flowable<Packet> observeAllPacketsOn(Scheduler scheduler) {
        return backpressuredPackets.observeOn(scheduler);
    }
}
I'm not sure if this actually fits my requirements. There's a note on the onBackpressureLatest javadoc regarding observeOn that I don't understand:
Note that due to the nature of how backpressure requests are propagated through subscribeOn/observeOn, requesting more than 1 from downstream doesn't guarantee a continuous delivery of onNext events
And I have other questions:
Does the onBackpressureLatest call make it so that the items are no longer multicasted?
How can I test my requirements?
Bonus: If I have multiple such publishers (in the same class or elsewhere), what is the best way to make the same pattern reusable? Create my own Flowable with delegation/extra methods?
I'm not sure if this actually fits my requirements.
It does not. Apply either onBackpressureLatest or onBackpressureBuffer followed by observeOn in the observeSomePacketsOn and observeAllPacketsOn respectively.
Does the onBackpressureLatest call make it so that the items are no longer multicasted?
The multicasting is done by PublishProcessor and different subscribers will establish a channel to it independently where the onBackpressureXXX and observeOn operators take effect on an individual subscriber basis.
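For illustration (assuming direct access to the packets processor; schedulerA and schedulerB are placeholders), each subscriber gets its own backpressure strategy and scheduler:

// each chain applies its own backpressure handling and observeOn to the shared PublishProcessor
Flowable<Packet> lossy = packets.onBackpressureLatest().observeOn(schedulerA);
Flowable<Packet> lossless = packets.onBackpressureBuffer().observeOn(schedulerB);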
How can I test my requirements?
Subscribe through the lossy or lossless Flowable with a TestSubscriber (Flowable.test()), feed a known set of Packets into packets and see how many of them arrived, either via TestSubscriber.assertValueCount() or TestSubscriber.values(). The lossy one should deliver between 1 and N values and the lossless one should have all N values after a grace period.
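A minimal sketch of the lossless check, building the same chain inline around a PublishProcessor so the test can drive the emissions directly (the class name and the count of 1000 are illustrative):

import java.util.concurrent.TimeUnit;

import io.reactivex.processors.PublishProcessor;
import io.reactivex.schedulers.Schedulers;
import io.reactivex.subscribers.TestSubscriber;

public class LosslessPathTest {
    public static void main(String[] args) {
        PublishProcessor<Integer> packets = PublishProcessor.create();

        // the lossless path: buffer everything, then hop threads via observeOn
        TestSubscriber<Integer> lossless = packets
                .onBackpressureBuffer()
                .observeOn(Schedulers.single())
                .test();

        for (int i = 0; i < 1000; i++) {
            packets.onNext(i);
        }
        packets.onComplete();

        // wait for the asynchronous observeOn hop to drain, then check nothing was dropped
        lossless.awaitDone(5, TimeUnit.SECONDS)
                .assertValueCount(1000);
    }
}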
Bonus: If I have multiple such publishers (in the same class or elsewhere), what is the best way to make the same pattern reusable? Create my own Flowable with delegation/extra methods?
You could turn the observeAllPacketsOn into a FlowableTransformer and instead of a method call on MyBroadcaster, use compose, for example:
class MyTransformers {
    public static <T> FlowableTransformer<T, T> lossyObserveOn(Scheduler s) {
        return f -> f.onBackpressureLatest().observeOn(s);
    }
}
new MyBroadcaster().getPacketFlow()
        .compose(MyTransformers.lossyObserveOn(scheduler))
        .subscribe(/* ... */);
Suppose that we need to transform a hot Observable in a way that we need to know all of its previously emitted items to be able to determine what to emit next. The solution which I find the most convenient is to pass an instance of a Func1 subclass, which has a global state (e.g. a map or list of previously emitted items) to flatMap. In each call, the Func1 instance would update its state and based on that, decide what to return.
However, I am worried about the "niceness" of this solution. As far as I know, RxJava does not go well with global and mutable state, which this solution seems to be in contrast with. On the other hand, I am sure that my Observable fulfills the Observable contract, so it seems to be at least a working solution, and even if it could be called concurrently, synchronization would solve the problem.
Other possible solutions could be:
Creating an Operator. Mutable state in Operators is allowed, I guess. Anyway, I try to avoid custom operators, as they are trickier.
Propagating the history of the Observable through scan (in a List or Map). I would either use the same object (List or Map) for every emitted item, which introduces a mutable object into the stream, or copy the entire object every time, which would be very wasteful.
Subscribing to the original Observable, modifying some global state from the subscriber, and emitting items on a Subject (the transformed Observable) using this global state. I considered this because the global state (and synchronization) is then handled outside the scope of RxJava.
So the question is: Should I use the Func1 implementation with mutable state in flatMap for transforming items based on the history of previously emitted items (which works, btw), and if not, what alternatives should I use? In general, I am confused about the recommended way to handle a complex mutable state needed for the transformation of Observables.
I hope I have expressed my problem clearly. Otherwise, let me know and I will try to describe it with the help of some specific problems and code.
Flows with functions containing mutable state are generally not recommended as the mutable state could be potentially shared across multiple Subscribers to a particular Observable chain. Often though, most developers assemble Observables when needed and rarely ever reuse the same Observable. For example, a button click handler will create an Observable that, through composition, forks off two other Observables to get data from two different places asynchronously, and then subscribe to this thread-local Observable instance. A new button click will repeat the process with a fresh and independent Observable.
Here lies the solution to your stateful-function problem: make the existence of the stateful bits depend on the individual Subscribers subscribing: defer()
Observable<Integer> o = Observable.defer(() -> {
    return Observable.range(1, 10)
        .map(new Func1<Integer, Integer>() {
            int sum;
            @Override
            public Integer call(Integer v) {
                sum += v;
                return sum;
            }
        });
});

o.subscribe(System.out::println);
o.subscribe(System.out::println);
Since the Func1 inner class will be created for each subscribe call, its sum field will be local to each individual consumer. Note also that sum is returned and auto-boxed into an immutable Integer, which can then be freely read in some other thread (think observeOn), as it is completely detached from the sum field from then on.
Mutable state and shared, mutable state often are required for useful work. The issue is how well we isolate the mutability from outside parties.
Creating an operator hides the mutability within the operator instance. The downside is that the state is private to the observable chain.
scan(), reduce() and fold() (if it existed) would be good candidates, but they have very limited implementations, export their state in non-obvious ways and are also limited to the observable chain they are attached to.
Subject or Relay objects provide useful cut-out points.
Going back to basics, using a privately accessible data structure in thread-safe ways is not a bad thing. If you are only concerned about the one observer chain, then either of options 1 or 3 will do the job readily.
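For reference, a minimal sketch of the scan() option mentioned above (RxJava 1 style; note that it copies the history on each step, which is exactly the trade-off the question points out):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import rx.Observable;

public class ScanHistoryExample {
    public static void main(String[] args) {
        // each emission carries the full history so far; the list is copied per step
        Observable<List<Integer>> history = Observable.range(1, 5)
                .scan(Collections.<Integer>emptyList(), (acc, v) -> {
                    List<Integer> next = new ArrayList<>(acc);
                    next.add(v);
                    return next;
                });

        history.subscribe(System.out::println);
        // prints [], [1], [1, 2], [1, 2, 3], [1, 2, 3, 4], [1, 2, 3, 4, 5]
    }
}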
I started to play around with RxJava and ReactFX, and I became pretty fascinated with it. But as I'm experimenting I have dozens of questions and I'm constantly researching for answers.
One thing I'm observing (no pun intended) is of course lazy execution. With my exploratory code below, I noticed nothing gets executed until merge.subscribe(pet -> System.out.println(pet)) is called. But what fascinated me is that when I added a second subscriber, merge.subscribe(pet -> System.out.println("Feed " + pet)), it fired the "iteration" again.
What I'm trying to understand is the behavior of the iteration. It does not seem to behave like a Java 8 stream that can only be used once. Is it literally going through each String one at a time and posting it as the value for that moment? And do any new subscribers following any previously fired subscribers receive those items as if they were new?
public class RxTest {
    public static void main(String[] args) {
        Observable<String> dogs = Observable.from(ImmutableList.of("Dasher", "Rex"))
                .filter(dog -> dog.matches("D.*"));
        Observable<String> cats = Observable.from(ImmutableList.of("Tabby", "Grumpy Cat", "Meowmers", "Peanut"));
        Observable<String> ferrets = Observable.from(CompletableFuture.supplyAsync(() -> "Harvey"));

        Observable<String> merge = dogs.mergeWith(cats).mergeWith(ferrets);

        merge.subscribe(pet -> System.out.println(pet));
        merge.subscribe(pet -> System.out.println("Feed " + pet));
    }
}
Observable<T> represents a monad, a chained operation, not the execution of the operation itself. It is descriptive language, rather than the imperative you're used to. To execute an operation, you .subscribe() to it. Every time you subscribe, a new execution stream is created from scratch. Do not confuse streams with threads, as subscriptions are executed synchronously unless you specify a thread change with .subscribeOn() or .observeOn(). You chain new elements onto any existing operation/monad/Observable to add new behaviour, like changing threads, filtering, accumulation, transformation, etc. In case your observable is an expensive operation you don't want to repeat on every subscription, you can prevent re-execution by using .cache().
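As a small sketch of the .cache() point (RxJava 1.x; the "expensive work" here is just a println):

import rx.Observable;

public class CacheExample {
    public static void main(String[] args) {
        // without cache(), the work inside fromCallable would run once per subscriber;
        // cache() runs it once and replays the result to later subscribers
        Observable<String> expensive = Observable.fromCallable(() -> {
            System.out.println("doing expensive work");
            return "result";
        });

        Observable<String> cached = expensive.cache();

        cached.subscribe(System.out::println); // prints "doing expensive work", then "result"
        cached.subscribe(System.out::println); // prints only "result"
    }
}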
To make any asynchronous/synchronous Observable<T> operation into a synchronous, inlined one, use .toBlocking() to change its type to BlockingObservable<T>. Instead of .subscribe(), it contains methods to execute operations on each result with .forEach(), or to coerce the result with .first().
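For example, a fragment reusing the merge Observable from the question's RxTest (a sketch, RxJava 1.x):

// turn the asynchronous Observable into an inline, blocking sequence
BlockingObservable<String> blockingPets = merge.toBlocking();

// runs on the calling thread until the source completes
blockingPets.forEach(pet -> System.out.println("Groom " + pet));

// blocks until the first item is available, then returns it
String first = merge.toBlocking().first();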
Observables are a good tool because they're mostly* deterministic (the same inputs always yield the same outputs unless you're doing something wrong), reusable (you can pass them around as part of a command/policy pattern) and, for the most part, they let you ignore concurrency because they should not rely on shared state (a.k.a. doing something wrong). BlockingObservables are good if you're trying to bring an observable-based library into imperative code, or if you're just executing an operation on an Observable that you are 100% confident is well managed.
Architecting your application around these principles is a change of paradigm that I can't really cover on this answer.
*There are breaches like Subject and Observable.create() that are needed to integrate with imperative frameworks.
I'm looking for a Java pattern for making a nested sequence of non-blocking method calls. In my case, some client code needs to asynchronously invoke a service to perform some use case, and each step of that use case must itself be performed asynchronously (for reasons outside the scope of this question). Imagine I have existing interfaces as follows:
public interface Request {}
public interface Response {}
public interface Callback<R extends Response> {
    void onSuccess(R response);
    void onError(Exception e);
}
There are various paired implementations of the Request and Response interfaces, namely RequestA + ResponseA (given by the client), RequestB + ResponseB (used internally by the service), etc.
The processing flow looks like this:
In between the receipt of each response and the sending of the next request, some additional processing needs to happen (e.g. based on values in any of the previous requests or responses).
So far I've tried two approaches to coding this in Java:
anonymous classes: gets ugly quickly because of the required nesting
inner classes: neater than the above, but still hard for another developer to comprehend the flow of execution
Is there some pattern to make this code more readable? For example, could I express the service method as a list of self-contained operations that are executed in sequence by some framework class that takes care of the nesting?
Since the implementation (not only the interface) must not block, I like your list idea.
Set up a list of "operations" (perhaps Futures?), for which the setup should be pretty clear and readable. Then upon receiving each response, the next operation should be invoked.
With a little imagination, this sounds like the chain of responsibility. Here's some pseudocode for what I'm imagining:
public void setup() {
    this.operations.add(new Operation(new RequestA(), new CallbackA()));
    this.operations.add(new Operation(new RequestB(), new CallbackB()));
    this.operations.add(new Operation(new RequestC(), new CallbackC()));
    this.operations.add(new Operation(new RequestD(), new CallbackD()));
    startNextOperation();
}

private void startNextOperation() {
    if (this.operations.isEmpty()) {
        reportAllOperationsComplete();
        return; // nothing left to start
    }
    Operation op = this.operations.remove(0);
    op.request.go(op.callback);
}

private class CallbackA implements Callback<ResponseA> {
    public void onSuccess(ResponseA response) {
        // store response? etc?
        startNextOperation();
    }
}
...
In my opinion, the most natural way to model this kind of problem is with Future<V>.
So instead of using a callback, just return a "thunk": a Future<Response> that represents the response that will be available at some point in the future.
Then you can either model subsequent steps as things like Future<ResponseB> step2(Future<ResponseA>), or use ListenableFuture<V> from Guava. Then you can use Futures.transform() or one of its overloads to chain your functions in a natural way while still preserving the asynchronous nature.
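A rough sketch of that chaining with Guava (serviceA, serviceB and buildRequestB are hypothetical; transformAsync is used here, which in older Guava versions was an overload of transform()):

import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;

ListenableFuture<ResponseB> runStepsAandB() {
    // step A returns a future instead of taking a callback (hypothetical service API)
    ListenableFuture<ResponseA> stepA = serviceA.call(new RequestA());

    // chain step B off step A's result without blocking; the chained function itself returns a future
    return Futures.transformAsync(
            stepA,
            responseA -> serviceB.call(buildRequestB(responseA)), // buildRequestB is hypothetical
            MoreExecutors.directExecutor());
}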
If used in this way, Future<V> behaves like a monad (in fact, I think it may qualify as one, although I'm not sure off the top of my head), and so the whole process feels a bit like IO in Haskell as performed via the IO monad.
You can use the actor computing model. In your case, the client, the services, and callbacks [B-D] can all be represented as actors.
There are many actor libraries for Java. Most of them, however, are heavyweight, so I wrote a compact and extendable one: df4j. It treats the actor model as a specific case of the more general dataflow computing model and, as a result, lets the user create new types of actors to optimally fit their requirements.
I am not sure if I understand your question correctly. If you want to invoke a service and, on its completion, pass the result to another object that can continue processing with it, you can look at using the Composite and Observer patterns to achieve this.