Spring WebFlux + ReactiveMongoDB, Can't save entity - java

I expect that after the execution of the program, the Rubber will be saved in Mongo. The result is 200 OK, but nothing was saved to the database. I suspect that the problem is in the doOnSuccess method. How should I use it? Or what else could be the problem?
@PostMapping
public Mono<Rubber> create(@RequestBody Rubber rubber) {
    return rubberService.create(rubber);
}

@Override
public Mono<Rubber> create(Rubber rubber) {
    return Mono.just(rubber)
            .map(rubberToRubberEntityConverter::convert)
            .doOnSuccess(rubberRepository::save)
            .doOnError((throwable) -> Mono.error(new ApplicationException("Can't create rubber :( ", throwable)))
            .map(rubberEntityToRubberConverter::convert);
}

@Repository
public interface RubberRepository extends ReactiveMongoRepository<RubberEntity, String> {
}

Your reactive chain isn't set up correctly:
return Mono.just(rubber)
        .map(rubberToRubberEntityConverter::convert)
        .doOnSuccess(rubberRepository::save)
You're not actually doing anything reactive here - you're taking a value, wrapping it in a Mono, converting it (synchronously), then performing a side effect (also synchronously). In this case, your side effect simply assembles the reactive chain to save to the repository (which returns a Mono), but since that Mono is never subscribed to, the save never actually occurs.
Your doOnError() call has a similar issue - you're again returning a Mono rather than performing a side-effect. Instead, you almost certainly want to use onErrorMap() to convert between one error and another.
In short, any time you use doOnSuccess(), doOnError() etc. and use a method that returns a publisher of some description, it's almost always going to be the wrong thing to do. Using Mono.just() is also a hint that you're not starting with a reactive chain - not necessarily wrong in and of itself, but it can be a warning sign that you're not actually creating a "real" reactive chain.
Instead, you probably want something like:
return rubberRepository.save(rubberToRubberEntityConverter.convert(rubber))
        .onErrorMap((throwable) -> new ApplicationException("Can't create rubber :( ", throwable))
        .map(rubberEntityToRubberConverter::convert);
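To make the failure mode concrete, here is a dependency-free sketch (plain Java, no Reactor; all names are illustrative): a `Supplier` stands in for the `Mono` returned by a reactive repository's `save`. Assembling it costs nothing - only the terminal call, the analogue of subscribing, runs the work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class LazySaveDemo {
    // Records what actually executed, so we can see whether the save ran.
    static final List<String> log = new ArrayList<>();

    // Stand-in for rubberRepository.save(entity): returns a lazy "publisher"
    // that does nothing until someone "subscribes" by calling get().
    static Supplier<String> save(String entity) {
        return () -> {
            log.add("saved " + entity);
            return entity;
        };
    }

    public static void main(String[] args) {
        // The doOnSuccess bug in one line: assemble the save, then drop it.
        save("rubber");                      // assembled, never executed
        System.out.println(log);             // prints [] -- nothing was saved

        // "Subscribing" (calling get()) is what makes the work happen.
        save("rubber").get();
        System.out.println(log);             // prints [saved rubber]
    }
}
```

This is exactly why the endpoint returned 200 OK with nothing in Mongo: the chain completed, but the save publisher built inside `doOnSuccess` was discarded unsubscribed.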

Related

Spring Webflux Proper Way To Find and Save

I created the method below to find an Analysis object, update its results field, and then save the result in the database without waiting for a return.
public void updateAnalysisWithResults(String uuidString, String results) {
    findByUUID(uuidString).subscribe(analysis -> {
        analysis.setResults(results);
        computeSCARepository.save(analysis).subscribe();
    });
}
Subscribing within a subscribe feels poorly written.
Is this a bad practice?
Is there a better way to write this?
UPDATE:
entry point
@PatchMapping("compute/{uuid}/results")
public Mono<Void> patchAnalysisWithResults(@PathVariable String uuid, @RequestBody String results) {
    return computeSCAService.updateAnalysisWithResults(uuid, results);
}

public Mono<Void> updateAnalysisWithResults(String uuidString, String results) {
    // findByUUID(uuidString).subscribe(analysis -> {
    //     analysis.setResults(results);
    //     computeSCARepository.save(analysis).subscribe();
    // });
    return findByUUID(uuidString)
            .doOnNext(analysis -> analysis.setResults(results))
            .doOnNext(computeSCARepository::save)
            .then();
}
It is not working because you have misunderstood what doOnNext does.
Let's start from the beginning.
A Flux or a Mono is a producer; it produces items. Your application produces things to the calling client, hence it should always return either a Mono or a Flux. If you don't want to return anything, you should return a Mono<Void>.
When the client subscribes to your application, Reactor will call all operators in the opposite direction until it finds a producer. This is what is called the assembly phase. If your operators don't all chain together, you are doing what I call breaking the reactive chain.
When you break the chain, the parts broken off the chain won't be executed.
If we look at your example but in a more exploded version:
@Test
void brokenChainTest() {
    updateAnalysisWithResults("12345", "Foo").subscribe();
}

public Mono<Void> updateAnalysisWithResults(String uuidString, String results) {
    return findByUUID(uuidString)
            .doOnNext(analysis -> analysis.setValue(results))
            .doOnNext(this::save)
            .then();
}

private Mono<Data> save(Data data) {
    return Mono.fromCallable(() -> {
        System.out.println("Will not print");
        return data;
    });
}

private Mono<Data> findByUUID(String uuidString) {
    return Mono.just(new Data());
}

private static class Data {
    private String value;

    public void setValue(String value) {
        this.value = value;
    }
}
In the above example, save is a callable function that returns a producer. But if we run the test, you will notice that the print is never executed.
This has to do with the usage of doOnNext. If we read the docs for it, it says:
Add behavior triggered when the Mono emits a data successfully.
The Consumer is executed first, then the onNext signal is propagated downstream.
doOnNext takes a Consumer that returns void. If we look at doOnNext, the function signature is:
public final Mono<T> doOnNext(Consumer<? super T> onNext)
This means it takes a Consumer of T (or a supertype of T) and returns a Mono<T>. So, to keep a long explanation short: it consumes something and passes that same something along.
What this means is that doOnNext is usually used for what are called side effects - something done on the side that does not hinder the current flow. One of those things could, for instance, be logging: logging consumes a string and logs it, while we want to keep the string flowing down our program. Or maybe we want to increment a number on the side, or modify some state somewhere. You can read all about side effects in the Reactor documentation.
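In plain java.util.function terms (an analogy only, not Reactor's API; the `Analysis` class here is illustrative), a doOnNext-style side effect is just a `Consumer<T>`: it returns void, so all it can do is observe or mutate the value passing by:

```java
import java.util.function.Consumer;

public class SideEffectDemo {
    static class Analysis {
        private String results;

        void setResults(String results) { this.results = results; }
        String getResults() { return results; }
    }

    public static void main(String[] args) {
        Analysis analysis = new Analysis();

        // A doOnNext-style side effect: the Consumer returns void, so it can
        // only observe or mutate the value "flowing past" it...
        Consumer<Analysis> sideEffect = a -> a.setResults("done");
        sideEffect.accept(analysis);

        // ...while the same object stays available to the rest of the flow.
        System.out.println(analysis.getResults()); // done
    }
}
```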
You can think of it visually this way:
_____ side effect (for instance logging)
/
___/______ main reactive flow
That's why your first doOnNext works: you are modifying state on the side - you are setting a value on your class, hence modifying its state.
The save in the second statement, on the other hand, does not get executed. That function actually returns something we need to take care of.
This is what it looks like:
save
_____
/ \ < Broken return
___/ ____ no main reactive flow
All we have to do is change one single line:
// From
.doOnNext(this::save)
// To
.flatMap(this::save)
flatMap takes whatever is in the Mono, and then we can use that to execute something and then return a "new" something.
So our flow (with flatMap) now looks like this:
setValue() save()
______ _____
/ / \
__/____________/ \______ return to client
So with the use of flatMap we are now saving and returning whatever was returned from that function triggering the rest of the chain.
If you then choose to ignore whatever is returned from the flatMap, it is completely correct to do as you have done and call then, which will:
Return a Mono<Void> which only replays complete and error signals from this Mono.
The general rule is: in a fully reactive application, you should never block.
And you generally don't subscribe unless your application is the final consumer. If your application started the request, then you are the consumer of something else, so you subscribe. If a webpage started the request, then it is the final consumer and it subscribes.
Subscribing in an application that is producing data is like running a bakery and eating your own baked bread at the same time.
Don't do that, it's bad for business :D
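The flattening idea behind flatMap also exists in plain Java's `Optional` (an analogy only - `Mono` adds asynchrony on top; the names here are illustrative):

```java
import java.util.Optional;

public class FlatMapDemo {
    // Stand-in for an async save: its result comes back wrapped, like Mono<Data>.
    static Optional<String> save(String data) {
        return Optional.of(data + ":saved");
    }

    public static void main(String[] args) {
        // map would leave us with a wrapper inside a wrapper -- the inner
        // value is never unwrapped into the main flow.
        Optional<Optional<String>> nested = Optional.of("data").map(FlatMapDemo::save);
        System.out.println(nested); // Optional[Optional[data:saved]]

        // flatMap unwraps the inner wrapper and keeps it in the main flow.
        Optional<String> flat = Optional.of("data").flatMap(FlatMapDemo::save);
        System.out.println(flat); // Optional[data:saved]
    }
}
```

With `Mono`, the "wrapper inside a wrapper" problem is worse than awkward types: the inner publisher is simply never subscribed, so its work never happens.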
Subscribing inside a subscribe is not good practice. You can use the flatMap operator to solve this problem.
public void updateAnalysisWithResults(String uuidString, String results) {
    findByUUID(uuidString).flatMap(analysis -> {
        analysis.setResults(results);
        return computeSCARepository.save(analysis);
    }).subscribe();
}

Returning a single response or multiple responses in loop operations?

In my Java app, I have the following service method that calls another method and accumulates the responses, then returns them as a list. If there is no exception, it works properly. However, one of the calls in the loop may throw an exception; in that case the method cannot return the responses retrieved before the exception (if there are 10 iterations in the loop and the 6th one throws, the 5 responses already added to the list are lost).
public List<CommandResponse> process(final UUID uuid) {
    final Site site = siteRepository.findByUuid(uuid)
            .orElseThrow(() -> new EntityNotFoundException(SITE_ENTITY_NAME));
    // code omitted for brevity
    for (Type providerType : providerTypeList) {
        // operations
        responses.add(demoService.demoMethod());
    }
    return responses;
}
Under these conditions, I am wondering if I should use a try-catch mechanism, or return a response inside the loop and finally return null. What would you suggest for this situation?
public CommandResponse operation(final UUID uuid) {
    final Site site = siteRepository.findByUuid(uuid)
            .orElseThrow(() -> new EntityNotFoundException(SITE_ENTITY_NAME));
    // code omitted for brevity
    for (Type providerType : providerTypeList) {
        // operations
        return demoService.demoMethod();
    }
    return null;
}
Well, following best practices, the method demoMethod() should not throw an exception; instead it should capture the exception and return it as part of the response.
This implies that CommandResponse can hold an error response. Following this, the code looks as follows:
interface CommandResponse<T> {
    T errorResponse();
    T successResponse();
    boolean isSuccess();
}
And then later, while rendering the response, you can handle failures/exceptions as the use case requires.
OR
Another way to handle this is to have an interface Response with two implementations, one for success and another for failure, thus making the process method return List<Response>.
It all depends on the requirements, the contract between your process() method and its callers.
I can imagine two different styles of contract:
All Or Nothing: the caller needs the complete responses list, and can't sensibly proceed if some partial response is missing. I'd recommend to throw an exception in case of an error. Typically, this is the straightforward approach, and applies to many real-world situations (and the reason why the concept of exceptions was introduced).
Partial Results: the caller wants to get as much of the complete results list as currently possible (plus the information about which parts are missing?). Return a data structure consisting of partial results plus error descriptions. This places an additional burden on the caller (extracting results from a structure instead of directly getting them, having to explicitly deal with error messages, etc.), so I'd only go that way if there is a convincing use case.
Both contracts can be valid. Only you know which one matches your situation. So, choose the right one, and document the decision.
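As a sketch of the Partial Results contract (plain Java; `ProcessResult`, `callProvider`, and the provider names are all hypothetical): collect what succeeded and describe what failed, instead of losing the earlier responses to one exception:

```java
import java.util.ArrayList;
import java.util.List;

public class PartialResultsDemo {
    // Hypothetical holder for the Partial Results contract: everything that
    // succeeded so far, plus a description of everything that failed.
    static class ProcessResult {
        final List<String> responses = new ArrayList<>();
        final List<String> errors = new ArrayList<>();
    }

    // Stand-in for demoService.demoMethod(): one provider fails.
    static String callProvider(String provider) {
        if (provider.equals("bad")) throw new IllegalStateException("provider down");
        return "response-from-" + provider;
    }

    static ProcessResult process(List<String> providers) {
        ProcessResult result = new ProcessResult();
        for (String provider : providers) {
            try {
                result.responses.add(callProvider(provider));
            } catch (RuntimeException e) {
                // Record the failure and keep going, instead of losing the
                // responses gathered before the exception.
                result.errors.add(provider + ": " + e.getMessage());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        ProcessResult r = process(List.of("a", "bad", "c"));
        System.out.println(r.responses); // [response-from-a, response-from-c]
        System.out.println(r.errors);    // [bad: provider down]
    }
}
```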

Converting portable, imperative code to reactive without resorting to blocking?

I have some legacy imperative code for saving & loading objects by key in multiple document datastores. Essentially, it's written portably so the DatastoreClient knows nothing about the data it's storing, but is given a key by the repository using it for predictable retrieval. What would be the best way to make that pattern reactive?
Legacy code is of the form
public class CustomerRepository implements CrudRepository<Customer, Long> {
    private final DatastoreClient datastoreClient;
    private final ObjectMapper mapper = new ObjectMapper(); // com.fasterxml.jackson

    public CustomerRepository(final DatastoreClient datastoreClient) {
        this.datastoreClient = datastoreClient;
    }

    public void createOrUpdate(Customer c) {
        datastoreClient.makeAndStoreDocument(mapper.convertValue(c, Map.class),
                c.getId());
    }
}
I've managed to rewrite datastoreClient.makeAndStoreDocument(...) to use the Project Reactor types Mono<Map<String, Object>> and Mono<Long|String>, but what is the right way to get the object and key to this method reactively? Or is the better answer to start from scratch on the interfaces?
public Mono<Void> createOrUpdateReactive(final Mono<Customer> customerMono) {
    return customerMono.flatMap(customer -> datastoreClient
            .makeAndStoreDocument(
                    Mono.just(mapper.convertValue(customer, Map.class)),
                    Mono.just(customer.getId())
            )
    );
}
Doesn't this end up blocking to unpack the real data out of the first Mono?
I've added the underlying DatastoreClient makeAndStoreDocument function:
public class GoogleFirestoreClient implements DatastoreClient {
    @Override
    public Mono<Void> makeAndStoreDocument(final Mono<Map<String, Object>> model, final Mono<String> key) {
        // The client library for Firestore is synchronous/blocking, so we offload the actual
        // request to a separate, elastic thread pool. When the result comes back, a separate,
        // asynchronously generated result goes back up the chain.
        return Mono.zip(model, key)
                .publishOn(Schedulers.boundedElastic())
                .doOnNext(tuple -> db.collection(collectionName).document(tuple.getT2()).set(tuple.getT1()))
                .retryWhen(Retry.max(3).filter(error -> error instanceof InterruptedException))
                .doOnSuccess(tuple -> System.out.println("Wrote object: " + tuple.getT1() + " to Firestore collection " + collectionName))
                .doOnError(ExecutionException.class, ee -> logger.error("ExecutionException in createOrUpdateReactive. ", ee))
                .doOnError(InterruptedException.class, ie -> logger.error("Reactive CreateOrUpdate interrupted more than limit allows.", ie))
                .then();
    }
}
but what is the right way to "split" my Mono in the caller?
There is no right answer; you design your API the way you want. As long as you don't call block, or in this specific case subscribe, you can solve this however works best for you, in accordance with your team's decisions in designing the API to the database.
How to design APIs is out of scope for this question and extremely opinion based. What I can suggest in this case is looking into the single responsibility principle, which means that one thing does one thing, and does it really well.
makeAndStoreDocument does two things (hence the name), which is not inherently wrong, but can for instance be harder to test, since you need to test for two things in one single thing (and if you need to change one thing but not the other, tests need to be rewritten and complexity can build up).
But now we are in opinion-based territory, and Stack Overflow is not the site for such discussions; there are better sites for that purpose:
Software Engineering
Code review

Subscribe to an Observable without triggering it and then passing it on

This could get a little bit complicated, and I'm not that experienced with Observables and the Rx pattern, so bear with me:
Suppose you've got some arbitrary SDK method which returns an Observable. You consume the method from a class which is - among other things - responsible for retrieving data and, while doing so, does some caching, so let's call it DataProvider. Then you've got another class which wants to access the data provided by DataProvider. Let's call it Consumer for now. So there we've got our setup.
Side note for all the pattern friends out there: I'm aware that this is not MVP, it's just an example for an analogous, but much more complex problem I'm facing in my application.
That being said, in Kotlin-like pseudo code the described situation would look like this:
class Consumer(val provider: DataProvider) {
    fun logic() {
        provider.getData().subscribe(...)
    }
}

class DataProvider(val sdk: SDK) {
    fun getData(): Observable {
        val observable = sdk.getData()
        observable.subscribe(/* cache data as it passes through */)
        return observable
    }
}

class SDK {
    fun getData(): Observable {
        return fetchDataFromNetwork()
    }
}
The problem is that by calling observable.subscribe() in the DataProvider I'm already triggering the Observable, which I don't want. I want the DataProvider to just silently listen - in this example the triggering should be done by the Consumer.
So what's the best Rx-compatible solution for this problem? The one outlined in the pseudo code above definitely isn't, for various reasons, one of which is the premature triggering of the network request before the Consumer has subscribed to the Observable. I've experimented with publish().autoConnect(2) before calling subscribe() in the DataProvider, but that doesn't seem to be the canonical way to do this kind of thing. It just feels hacky.
Edit: Through SO's excellent "related" feature I've just stumbled across another question pointing in a different direction, but with a solution which could also be applicable here, namely flatMap(). I knew that one before, but never actually had to use it. It seems like a viable way to me - what's your opinion on it?
If the caching step is not supposed to modify events in the chain, the doOnNext() operator can be used:
class DataProvider(val sdk: SDK) {
fun getData(): Observable<*> = sdk.getData().doOnNext(/*cache data as it passes through*/)
}
Yes, flatMap could be a solution. Moreover, you could split your stream into a chain of small Observables:
public class DataProvider {
    private Api api;
    private Parser parser;
    private Cache cache;

    public Observable<List<User>> getUsers() {
        return api.getUsersFromNetwork()
                .flatMap(parser::parseUsers)
                .map(cache::cacheUsers);
    }
}

public class Api {
    public Observable<Response> getUsersFromNetwork() {
        // makes https request or whatever
    }
}

public class Parser {
    public Observable<List<User>> parseUsers(Response response) {
        // parse users
    }
}

public class Cache {
    public List<User> cacheUsers(List<User> users) {
        // cache users
    }
}
It's easy to test, maintain, and replace implementations (with usage of interfaces). Also, you can easily insert an additional step into your stream (for instance to log/convert/change the data you receive from the server).
The other quite convenient operator is map. Basically, the function you pass to map returns plain Data instead of Observable<Data>. It can make your code even simpler.
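The same map-vs-flatMap split exists in java.util.stream, which may make the distinction easier to see (illustrative names only; Rx adds asynchrony on top):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MapVsFlatMapDemo {
    // Like parser.parseUsers: one input produces a stream of results -> flatMap.
    static Stream<String> parse(String response) {
        return Stream.of(response.split(","));
    }

    // Like cache.cacheUsers: plain value in, plain value out -> map.
    static String cache(String user) {
        return user + "(cached)";
    }

    public static void main(String[] args) {
        List<String> users = Stream.of("alice,bob")
                .flatMap(MapVsFlatMapDemo::parse) // the step returns a Stream
                .map(MapVsFlatMapDemo::cache)     // the step returns a value
                .collect(Collectors.toList());
        System.out.println(users); // [alice(cached), bob(cached)]
    }
}
```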

Spring integration flow: perform task within flow

Hopefully this is the last question I'm asking on Spring Integration.
I'm facing the following problem: at the end of a pretty long IntegrationFlow DSL sheet there is this code:
return IntegrationFlows.
        //...
        .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header("jms_replyTo", responseQueue(), true)) // IntegrationMessageHeaderAccessor.CORRELATION_ID is not acceptable, though the message came from an outgoingGateway of another application with this header set
        .handle(requestRepository::save)
        .handle(
                Jms.outboundAdapter(queueConnectionFactory()).destination(serverQueue())
        )
        .get();
The problem is that after a handler like requestRepository::save the handler chain becomes broken. The trick only works if a gateway is passed in as the handler parameter.
How can I overcome this limitation? I think that utilizing wireTap here will not do the job because it is asynchronous. Here I actually save the message to store its jms_replyTo header and replace it with the saved one after the corresponding reply comes back from the server (the Smart Proxy enterprise integration pattern).
Any suggestions please?
Not sure why you say "the last question". Are you going to give up on Spring Integration? :-(
I guess your problem there is with the next .handle(), because your requestRepository::save is a one-way MessageHandler (the void return from the save() method), or your save() returns null.
The IntegrationFlow is a chain of executions where each one is called with the previous one's non-null result.
So, share your requestRepository::save, please!
UPDATE
Neither did it help to declare a MessageHandler bean as (m) -> requestRepository.save(m) and pass it into the handle(..) method as a param.
Yeah... I would like to see the signature for your requestRepository::save.
So, look: using a method reference for .handle(), you should determine your scenario. If you do a one-way handle that stops the flow, it is enough to follow the org.springframework.messaging.MessageHandler contract. With that, your method signature should look like this:
public void myHandle(Message<?> message)
If you'd like the flow to go ahead, you should return something from your service method. That result becomes the payload for the next .handle().
In this case your method should follow the org.springframework.integration.dsl.support.GenericHandler contract. Your method signature may look like:
public Bar myHandle(Foo foo, Map<String, Object> headers)
That's how the method reference works for the .handle().
You should understand how this method-chain style works: the output of the previous method is the input of the next. In this case we protect the flow from dead code, like a MessageHandler with a void return that is followed by another flow member. That's why you see the "This is the end of the integration flow." error.
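That chaining rule can be sketched in plain Java (hypothetical names - the real machinery is Spring Integration's): each handler's output feeds the next, and a handler that produces nothing ends the flow:

```java
import java.util.List;
import java.util.function.Function;

public class HandlerChainDemo {
    // Runs handlers like an IntegrationFlow: each output becomes the next
    // input, and a null result (a "one-way" handler) ends the flow early.
    static Object run(Object payload, List<Function<Object, Object>> handlers) {
        for (Function<Object, Object> handler : handlers) {
            payload = handler.apply(payload);
            if (payload == null) return null; // dead end: nothing to pass on
        }
        return payload;
    }

    public static void main(String[] args) {
        Function<Object, Object> save = msg -> { /* pretend to persist */ return msg; };   // GenericHandler-style
        Function<Object, Object> oneWay = msg -> { /* pretend to persist */ return null; }; // MessageHandler-style
        Function<Object, Object> exclaim = msg -> msg + "!";

        System.out.println(run("payload", List.of(save, exclaim)));   // payload!
        System.out.println(run("payload", List.of(oneWay, exclaim))); // null -- the chain is broken
    }
}
```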
I finally came up with a solution:
@Bean
public MessageHandler requestPersistingHandler() {
    return new ServiceActivatingHandler(message -> {
        requestRepository.save(message);
        return message.getPayload();
    });
}

//...

@Bean
public IntegrationFlow requestFlow() {
    return IntegrationFlows.from(
            Jms.messageDrivenChannelAdapter(queueConnectionFactory()).destination(bookingQueue())
    )
            .wireTap(controlBusMessageChannel())
            .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header(JMS_REPLY_HEADER, responseQueue(), true))
            .handle(requestPersistingHandler())
            .handle(
                    Jms.outboundAdapter(queueConnectionFactory()).destination(serverQueue())
            )
            .get();
}
I'm just not sure if there is a more straightforward way.
The only issue left is how to change a header in the "from-the-server" IntegrationFlow within the enrichHeaders method: I have no idea how to access the existing headers with the specification.
