Spring Webflux Proper Way To Find and Save

I created the method below to find an Analysis object, update its results field, and then save the result in the database without waiting for a return value.
public void updateAnalysisWithResults(String uuidString, String results) {
    findByUUID(uuidString).subscribe(analysis -> {
        analysis.setResults(results);
        computeSCARepository.save(analysis).subscribe();
    });
}
It feels poorly written to subscribe within a subscribe.
Is this bad practice?
Is there a better way to write this?
UPDATE:
entry point
@PatchMapping("compute/{uuid}/results")
public Mono<Void> patchAnalysisWithResults(@PathVariable String uuid, @RequestBody String results) {
    return computeSCAService.updateAnalysisWithResults(uuid, results);
}
public Mono<Void> updateAnalysisWithResults(String uuidString, String results) {
    // findByUUID(uuidString).subscribe(analysis -> {
    //     analysis.setResults(results);
    //     computeSCARepository.save(analysis).subscribe();
    // });
    return findByUUID(uuidString)
        .doOnNext(analysis -> analysis.setResults(results))
        .doOnNext(computeSCARepository::save)
        .then();
}

The reason it is not working is that you have misunderstood what doOnNext does.
Let's start from the beginning.
A Flux or a Mono is a producer: it produces items. Your application produces things to the calling client, hence it should always return either a Mono or a Flux. If you don't want to return anything, you should return a Mono<Void>.
When the client subscribes to your application, Reactor will call all operators in the opposite direction until it finds a producer. This is what is called the assembly phase. If all your operators don't chain together, you are doing what I call breaking the reactive chain.
When you break the chain, the parts broken off the chain won't be executed.
If we look at your example but in a more exploded version:
@Test
void brokenChainTest() {
    updateAnalysisWithResults("12345", "Foo").subscribe();
}

public Mono<Void> updateAnalysisWithResults(String uuidString, String results) {
    return findByUUID(uuidString)
        .doOnNext(analysis -> analysis.setValue(results))
        .doOnNext(this::save)
        .then();
}

private Mono<Data> save(Data data) {
    return Mono.fromCallable(() -> {
        System.out.println("Will not print");
        return data;
    });
}

private Mono<Data> findByUUID(String uuidString) {
    return Mono.just(new Data());
}

private static class Data {
    private String value;

    public void setValue(String value) {
        this.value = value;
    }
}
In the above example, save is a callable function that returns a producer. But if we run the test, you will notice that the print statement is never executed.
This has to do with the usage of doOnNext. If we read the docs for it, it says:
Add behavior triggered when the Mono emits a data successfully.
The Consumer is executed first, then the onNext signal is propagated downstream.
doOnNext takes a Consumer that returns void. If we look at doOnNext, we see that its signature looks as follows:
public final Mono<T> doOnNext(Consumer<? super T> onNext)
This means that it takes in a Consumer of T (or a supertype of T) and returns a Mono<T>. To keep a long explanation short: it consumes something, but the same something keeps flowing downstream.
What this means is that doOnNext is usually used for what are called side effects: something that is done on the side and does not hinder the current flow. One of those things could, for instance, be logging. Logging consumes, say, a string and logs it, while we want to keep the string flowing down our program. Or maybe we want to increment a number on the side, or modify some state somewhere. You can read all about side effects here.
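As a small illustration (a hypothetical sketch, not part of the question's code), a logging side effect with doOnNext leaves the value flowing downstream untouched:
// Hypothetical sketch: doOnNext logs the value as a side effect,
// while the same value keeps flowing down the chain.
Mono<String> greeting = Mono.just("hello")
    .doOnNext(value -> System.out.println("saw: " + value)) // side effect only
    .map(String::toUpperCase);                              // main flow continues

greeting.subscribe(System.out::println); // prints "saw: hello", then "HELLO"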
You can think of it visually this way:
_____ side effect (for instance logging)
/
___/______ main reactive flow
That's why your first doOnNext, the setter, works: you are modifying state on the side. You are setting the value on your class, hence modifying the state of your class to have a value.
The second statement, the save, on the other hand does not get executed. That function actually returns something we need to take care of.
This is what it looks like:
save
_____
/ \ < Broken return
___/ ____ no main reactive flow
All we have to do is change one single line:
// From
.doOnNext(this::save)
// To
.flatMap(this::save)
flatMap takes whatever is in the Mono, lets us use it to execute something, and then returns a "new" something.
So our flow (with flatMap) now looks like this:
setValue() save()
______ _____
/ / \
__/____________/ \______ return to client
So with the use of flatMap we are now saving, and returning whatever that function returned, which triggers the rest of the chain.
If you then choose to ignore whatever is returned from the flatMap, it's completely correct to do what you have done and call then, which will:
Return a Mono which only replays complete and error signals from this
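Putting it together, the corrected service method from the question would then look something like this (a sketch combining the flatMap change with then()):
public Mono<Void> updateAnalysisWithResults(String uuidString, String results) {
    return findByUUID(uuidString)
        .doOnNext(analysis -> analysis.setResults(results)) // side effect: mutate the loaded entity
        .flatMap(computeSCARepository::save)                // the save is now part of the chain
        .then();                                            // only propagate complete/error signals
}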
The general rule is, in a fully reactive application, you should never block.
And you generally don't subscribe unless your application is the final consumer. That means: if your application started the request, then you are the consumer of something else, so you subscribe. If a webpage starts off the request, then it is the final consumer and it is the one subscribing.
If you subscribe in the application that is producing the data, it's like running a bakery and eating your own baked bread at the same time.
Don't do that, it's bad for business :D

Subscribing inside a subscribe is not good practice. You can use the flatMap operator to solve this problem.
public void updateAnalysisWithResults(String uuidString, String results) {
    findByUUID(uuidString).flatMap(analysis -> {
        analysis.setResults(results);
        return computeSCARepository.save(analysis);
    }).subscribe();
}

Difference between Flux.subscribe(Consumer<? super T> consumer) and Flux.doOnNext(Consumer<? super T> onNext)

Just starting to understand reactive programming with Reactor, and I've come across this code snippet from a tutorial here: building-a-chat-application-with-angular-and-spring-reactive-websocket
class ChatSocketHandler(val mapper: ObjectMapper) : WebSocketHandler {
    val sink = Sinks.replay<Message>(100);
    val outputMessages: Flux<Message> = sink.asFlux();

    override fun handle(session: WebSocketSession): Mono<Void> {
        println("handling WebSocketSession...")
        session.receive()
            .map { it.payloadAsText }
            .map { Message(id = UUID.randomUUID().toString(), body = it, sentAt = Instant.now()) }
            .doOnNext { println(it) }
            .subscribe(
                { message: Message -> sink.next(message) },
                { error: Throwable -> sink.error(error) }
            );

        return session.send(
            Mono.delay(Duration.ofMillis(100))
                .thenMany(outputMessages.map { session.textMessage(toJson(it)) })
        )
    }

    fun toJson(message: Message): String = mapper.writeValueAsString(message)
}
I understand what it does, but not why the author uses a consumer within the subscribe method vs. chaining another doOnNext(consumer), i.e. the lines:
.doOnNext { println(it) }
.subscribe(
    { message: Message -> sink.next(message) },
    { error: Throwable -> sink.error(error) }
From the Reactor documentation I have read that Flux.subscribe(Consumer<? super T> consumer):
Subscribe a Consumer to this Flux that will consume all the elements in the sequence. It will request an unbounded demand (Long.MAX_VALUE).
For a passive version that observe and forward incoming data see doOnNext(java.util.function.Consumer).
However, from that I don't understand why one would choose one over the other; to me they seem functionally identical.
The difference is much more conventional than functional: side effects vs. a final consumer.
The doOnXXX series of methods are meant for user-designed side effects as the reactive chain executes - logging being the most common of these, but you may also have metrics, analytics, etc. that require a view of each element as it passes through. The key with all of these is that it doesn't make much sense to have any of them as a final consumer (such as the println() in your example above).
On the contrary, the subscribe() consumers are meant to be a "final consumer", and usually called by your framework (such as Webflux) rather than by user code - so this case is a bit of an exception to that rule. In this case they're actively passing the messages in this reactive chain to another sink for further processing - so it doesn't make much sense to have this as a "side-effect" style method, as you wouldn't want the Flux to continue beyond this point.
(Addendum: As said above, the normal approach with reactor / Webflux is to let Webflux handle the subscription, which isn't what's happening here. I haven't looked in detail to see if there's a more sensible way to achieve this without a user subscription, but in my experience there usually is, and calling subscribe manually is usually a bit of a code smell as a result. You should certainly avoid it in your own code wherever you can.)
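As an illustration of the convention (a hypothetical sketch, not from the tutorial): doOnNext observes values as a side effect while the chain continues, whereas the subscribe consumers terminate the chain.
// Hypothetical sketch of the convention described above.
Flux<String> messages = Flux.just("a", "b", "c")
    .doOnNext(msg -> System.out.println("passing through: " + msg)); // side effect, chain continues

// The final consumer: subscribing here ends the chain.
// In a typical WebFlux application the framework does this for you.
messages.subscribe(
    msg -> System.out.println("final consumer got: " + msg), // onNext consumer
    err -> System.err.println("failed: " + err)              // onError consumer
);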

Spring WebFlux + ReactiveMongoDB, Can't save entity

I expect that after the execution of the program, a Rubber will be saved in Mongo. The result is 200 OK, but nothing was saved to the database. I suspect that the problem is in the doOnSuccess method. How do I use it? Or what else could be the problem?
@PostMapping
public Mono<Rubber> create(@RequestBody Rubber rubber) {
    return rubberService.create(rubber);
}

@Override
public Mono<Rubber> create(Rubber rubber) {
    return Mono.just(rubber)
        .map(rubberToRubberEntityConverter::convert)
        .doOnSuccess(rubberRepository::save)
        .doOnError((throwable) -> Mono.error(new ApplicationException("Can't create ruber :( ", throwable)))
        .map(rubberEntityToRubberConverter::convert);
}

@Repository
public interface RubberRepository extends ReactiveMongoRepository<RubberEntity, String> {
}
Your reactive chain isn't set up correctly:
return Mono.just(rubber)
    .map(rubberToRubberEntityConverter::convert)
    .doOnSuccess(rubberRepository::save)
You're not actually doing anything reactive here - you're taking a value, wrapping it in a Mono, converting it (synchronously), then performing a side-effect (also synchronously.) In this case, your side-effect is simply setting up the reactive chain to save to the repository (which will return a Mono), but since that Mono is never subscribed to, the save never actually occurs.
Your doOnError() call has a similar issue - you're again returning a Mono rather than performing a side-effect. Instead, you almost certainly want to use onErrorMap() to convert between one error and another.
In short, any time you use doOnSuccess(), doOnError() etc. and use a method that returns a publisher of some description, it's almost always going to be the wrong thing to do. Using Mono.just() is also a hint that you're not starting with a reactive chain - not necessarily wrong in and of itself, but it can be a warning sign that you're not actually creating a "real" reactive chain.
Instead, you probably want something like:
return rubberRepository.save(rubberToRubberEntityConverter.convert(rubber))
    .onErrorMap((throwable) -> new ApplicationException("Can't create rubber :( ", throwable))
    .map(rubberEntityToRubberConverter::convert);

Converting Mono to Pojo without block

Is there a way to convert a Mono object to a plain Java POJO?
I have a WebClient connecting to a 3rd-party REST service, and instead of returning a Mono I have to extract that object and interrogate it.
All the examples I have found return Mono<Pojo>, but I have to get the Pojo itself. Currently I am doing it by calling block(), but is there a better way to avoid block()?
The issue with block() is that after a few runs it starts throwing errors like "block Terminated with error".
public MyPojo getPojo() {
    return myWebClient.get()
        .uri(generateUrl())
        .headers(createHttpHeaders(headersMap))
        .exchange()
        .flatMap(evaluateResponseStatus())
        .block();
}

private Function<ClientResponse, Mono<? extends MyPojo>> evaluateResponseStatus() {
    return response -> {
        if (response.statusCode() == HttpStatus.OK) {
            return response.bodyToMono(MyPojo.class);
        }
        if (webClientUtils.isError(response.statusCode())) {
            throw myHttpException(response);
            // This invokes my exceptionAdvice
            // but after a few runs it's ignored and a 500 error is returned.
        }
        return Mono.empty();
    };
}
It's not a good idea to block in order to operate on a value in a reactive stream. Project Reactor offers a selection of operators for handling the objects within a stream.
In your case, you can write the getPojo() method like this:
public Mono<MyPojo> getPojo() {
    return myWebClient.get()
        .uri(generateUrl())
        .headers(createHttpHeaders(headersMap))
        .retrieve()
        .onStatus(status -> webClientUtils.isError(status),
                  response -> Mono.error(myHttpException(response)))
        .bodyToMono(MyPojo.class);
}
Note that by using the onStatus method, we replaced the whole evaluateResponseStatus method from your example.
You would use this method like the following:
// some method
...
getPojo()
    .map(pojo -> /* do something with the pojo instance */)
...
I strongly advise you to look into Transforming an existing sequence in Project Reactor docs.
Since WebClient's block() is not recommended, another way to retrieve the values from the incoming HTTP response is to create a POJO in the calling application with the required fields. Then, once the Mono is received, use Mono.subscribe() and, within subscribe, add a lambda function with an input, say x, and retrieve the individual fields using x's getters, as sketched below. The values can be printed to the console or assigned to a local variable for further processing. This helps in two ways:
Avoid the dreaded .block()
Keep the call asynchronous when pulling large volumes of data.
This is one of many ways to achieve the desired outcome.
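A minimal sketch of that approach, assuming the getPojo() method above now returns Mono<MyPojo> and that MyPojo exposes a hypothetical getName() getter:
// Hypothetical sketch: consume the Mono asynchronously instead of blocking.
// Assumes getPojo() returns Mono<MyPojo> and MyPojo has a getName() getter.
getPojo().subscribe(
    x -> System.out.println("Received: " + x.getName()),  // extract fields via getters
    error -> System.err.println("Call failed: " + error)  // handle errors asynchronously
);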

Subscribe to an Observable without triggering it and then passing it on

This could get a little bit complicated and I'm not that experienced with Observables and the RX pattern so bear with me:
Suppose you've got some arbitrary SDK method which returns an Observable. You consume the method from a class which is - among other things - responsible for retrieving data and, while doing so, does some caching, so let's call it DataProvider. Then you've got another class which wants to access the data provided by DataProvider. Let's call it Consumer for now. So there we've got our setup.
Side note for all the pattern friends out there: I'm aware that this is not MVP, it's just an example for an analogous, but much more complex problem I'm facing in my application.
That being said, in Kotlin-like pseudo code the described situation would look like this:
class Consumer(val provider: DataProvider) {
    fun logic() {
        provider.getData().subscribe(...)
    }
}

class DataProvider(val sdk: SDK) {
    fun getData(): Observable {
        val observable = sdk.getData()
        observable.subscribe(/*cache data as it passes through*/)
        return observable
    }
}

class SDK {
    fun getData(): Observable {
        return fetchDataFromNetwork()
    }
}
The problem is that, by calling subscribe() in the DataProvider, I'm already triggering the Observable, which I don't want. I want the DataProvider to just silently listen; in this example the triggering should be done by the Consumer.
So what's the best Rx-compatible solution for this problem? The one outlined in the pseudo code above definitely isn't, for various reasons, one of which is the premature triggering of the network request before the Consumer has subscribed to the Observable. I've experimented with publish().autoConnect(2) before calling subscribe() in the DataProvider, but that doesn't seem to be the canonical way to do this kind of thing. It just feels hacky.
Edit: Through SO's excellent "related" feature I've just stumbled across another question pointing in a different direction, but with a solution which could also be applicable here, namely flatMap(). I knew that one before but never actually had to use it. Seems like a viable way to me - what's your opinion on that?
If the caching step is not supposed to modify events in the chain, the doOnNext() operator can be used:
class DataProvider(val sdk: SDK) {
    fun getData(): Observable<*> = sdk.getData().doOnNext(/*cache data as it passes through*/)
}
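A rough Java sketch of the same idea (assuming RxJava 2's io.reactivex.Observable and a hypothetical class and cache, not from the question): the provider only decorates the chain with a caching side effect, and nothing runs until the Consumer subscribes.
import io.reactivex.Observable;

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: DataProvider adds a caching side effect without subscribing;
// the simulated network call only happens when the consumer subscribes.
public class LazyCachingSketch {

    static class DataProvider {
        final List<String> cache = new ArrayList<>();

        Observable<String> getData() {
            // Simulated "SDK" call; deferred until someone subscribes.
            Observable<String> fromSdk = Observable.fromCallable(() -> {
                System.out.println("network call happens now");
                return "payload";
            });
            // No subscribe() here, just a side effect that caches values in flight.
            return fromSdk.doOnNext(cache::add);
        }
    }

    public static void main(String[] args) {
        DataProvider provider = new DataProvider();
        Observable<String> data = provider.getData(); // nothing printed yet
        data.subscribe(System.out::println);          // triggers "network call happens now", then "payload"
    }
}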
Yes, flatMap could be a solution. Moreover, you could split your stream into a chain of small Observables:
public class DataProvider {
    private Api api;
    private Parser parser;
    private Cache cache;

    public Observable<List<User>> getUsers() {
        return api.getUsersFromNetwork()
            .flatMap(parser::parseUsers)
            .map(cache::cacheUsers);
    }
}

public class Api {
    public Observable<Response> getUsersFromNetwork() {
        // makes https request or whatever
    }
}

public class Parser {
    public Observable<List<User>> parseUsers(Response response) {
        // parse users
    }
}

public class Cache {
    public List<User> cacheUsers(List<User> users) {
        // cache users
    }
}
It's easy to test, maintain, and replace implementations (with the use of interfaces). You could also easily insert an additional step into your stream (for instance to log/convert/change the data you receive from the server).
The other quite convenient operator here is map. Unlike flatMap's mapper, which returns an Observable<Data>, map's mapper returns just Data. It can make your code even simpler.
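A tiny sketch of that distinction, using hypothetical names (again assuming RxJava 2):
import io.reactivex.Observable;

// Hypothetical sketch: map for a synchronous mapper, flatMap when the mapper
// itself returns an Observable.
public class MapVsFlatMapSketch {
    public static void main(String[] args) {
        Observable<String> raw = Observable.just("alice,bob");

        // map: the mapper returns a plain value (String[]).
        raw.map(csv -> csv.split(","))
           .subscribe(parts -> System.out.println(parts.length + " users"));

        // flatMap: the mapper returns an Observable, which is flattened into the stream.
        raw.flatMap(csv -> Observable.fromArray(csv.split(",")))
           .subscribe(System.out::println);
    }
}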

Observable's doOnError correct location

I am kind of new to Observers, and I am still trying to figure them out. I have the following piece of code:
observableKafka.getRealTimeEvents()
    .filter(this::isTrackedAccount)
    .filter(e -> LedgerMapper.isDepositOrClosedTrade((Transaction) e.getPayload()))
    .map(ledgerMapper::mapLedgerTransaction)
    .map(offerCache::addTransaction)
    .filter(offer -> offer != null) // Offer may have been removed from cache since last check
    .filter(Offer::isReady)
    .doOnError(throwable -> {
        LOG.info("Exception thrown on realtime events");
    })
    .forEach(awardChecker::awardFailOrIgnore);
getRealTimeEvents() returns an Observable<Event>.
Does the location of .doOnError matter? Also, what is the effect of adding more than one call to it in this piece of code? I have realised I can do it and all of them get invoked, but I am not sure what its purpose could be.
Yes, it does. doOnError acts when an error is passing through the stream at that specific point, so if the operator(s) before doOnError throw, your action will be called. However, if you place the doOnError further up, it may or may not be called, depending on what downstream operators are in the chain.
Given
Observer<Object> ignore = new Observer<Object>() {
    @Override public void onCompleted() {
    }
    @Override public void onError(Throwable e) {
    }
    @Override public void onNext(Object t) {
    }
};
For example, the following code will always call doOnError:
Observable.<Object>error(new Exception()).doOnError(e -> log(e)).subscribe(ignore);
However, this code won't:
Observable.just(1).doOnError(e -> log(e))
    .flatMap(v -> Observable.<Integer>error(new Exception())).subscribe(ignore);
Most operators will bounce back exceptions that originate downstream.
Adding multiple doOnError calls is viable if you transform an exception via onErrorResumeNext or onExceptionResumeNext:
Observable.<Object>error(new RuntimeException())
    .doOnError(e -> log(e))
    .onErrorResumeNext(Observable.<Object>error(new IllegalStateException()))
    .doOnError(e -> log(e)).subscribe(ignore);
otherwise, you'd log the same exception at multiple locations of the chain.
The doOn??? methods are there for side effects: processing that isn't really your core business value, so to speak. Logging is a perfectly fine use for that.
That said, sometimes you want to do something more meaningful with an error, like retrying, or displaying a message to a user, etc... For these cases, the "rx" way would be to process the error in a subscribe call.
doOnError (and the other doOn methods) wraps the original Observable into a new one and adds behavior to it (around its onError method, obviously). That's why you can call it many times. Another benefit of being able to call it anywhere in the chain is that you can access errors that would otherwise be hidden from the consumer of the stream (the Subscriber), for instance because there is a retry further down the chain...
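For instance (a hypothetical sketch in the RxJava 1 style of the snippets above, not from the question), a doOnError placed before a retry sees every failed attempt, while the final subscriber only sees the outcome:
import rx.Observable;

// Hypothetical sketch: doOnError upstream of retry observes each failed attempt,
// even though retry and onErrorReturn hide those errors from the final subscriber.
public class DoOnErrorRetrySketch {
    public static void main(String[] args) {
        Observable.<Integer>error(new RuntimeException("boom"))
            .doOnError(e -> System.out.println("attempt failed: " + e.getMessage())) // runs on every attempt
            .retry(2)                       // resubscribes twice, so three attempts in total
            .onErrorReturn(e -> -1)         // the subscriber only ever sees this fallback
            .subscribe(v -> System.out.println("subscriber got: " + v));
    }
}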
