Nested repo calls with Spring Mono - Java

I am getting started with Spring WebFlux.
When I try to make a call to the repo based on the response of the first call, the call to the repository is not made.
I am looking up DocParts in a repo by docPartId and, for now, just building a new DocPart and writing over the existing one for testing.
(The db-handling code in the repo has been removed for simplicity, as the return values are the same as in this example.)
The newly created DocParts get flatMapped and should be upserted into the database, but the upsert call is never made; I have verified this with a debugger and also by console logging from it.
The first call to the repo is made, but the second call never happens, i.e. it gets to:
return docPartRepo.getDocPartById(docPartId)
        .map(docPart -> DocPart.builder().build())
        .flatMap(docPart -> docPartRepo.upsert(docPartId, docPart) // this upsert is never called
                .doOnSuccess(x -> LOGGER.info("Updated docRepo with docPart")));
Repo code example:
public Mono<DocPart> getDocPartById(Long docPartId) {
    return Mono.just(DocPart.builder().build());
}

public Mono<Void> upsert(Long id, DocPart docPart) {
    System.out.println("In Upsert");
    // make db call
    return Mono.empty();
}
If I test the code by just calling upsert directly, the call is made OK, but the doOnSuccess is not called:
final Mono<DocPart> docPartMono = docPartRepo.getDocPartById(docPartId)
        .map(docPart -> DocPart.builder().build());
final DocPart docPart = docPartMono.block();
docPartRepo.upsert(docPartId, docPart)
        .doOnSuccess(x -> System.out.println("Updated db")); // now the upsert is made, but not doOnSuccess
I am not sure whether it is connected to making nested calls to the same repo; can anybody give some input on what might be preventing the upsert inside flatMap from being called?
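A possibly relevant Reactor fact (a hedged note with a minimal self-contained demo, not the asker's code): a Mono chain executes only when something subscribes to it. In the second snippet above, "In Upsert" prints because that println is a plain side effect of invoking the upsert method, while doOnSuccess fires only when the returned Mono is subscribed and completes; the snippet discards that Mono, so it is never subscribed. The same would explain the first snippet if nothing downstream subscribes to the returned chain.
import reactor.core.publisher.Mono;

public class SubscribeDemo {

    // Stand-in for the repo's upsert: the println is a plain side effect of
    // calling the method, not part of the reactive chain.
    static Mono<Void> upsert() {
        System.out.println("In Upsert");
        return Mono.empty();
    }

    public static void main(String[] args) {
        Mono<Void> result = upsert()
                .doOnSuccess(x -> System.out.println("doOnSuccess")); // fires only on subscribe

        // Comment out the line below and "In Upsert" still prints,
        // but "doOnSuccess" never does.
        result.subscribe();
    }
}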


Flux.zip returns same results every time, instead of running query for each "zipped" tuple

I'm relatively new to WebFlux and am trying to zip two objects together in order to return a tuple, which is then used to build an object in the following step.
I'm doing it like so:
// I will shorten my code quite a bit; I'm including only the pieces that are
// involved in my problem "Flux.zip" call.
// This is a repository that is used in my "problem" code. It is simply an
// interface which extends ReactiveCrudRepository from Spring Data.
MyRepository repo;

// Wiring in my repository...
public MyClass(MyRepository repo) {
    this.repo = repo;
}
// Below is later in my reactive chain.
// Starting halfway down the chain, we have a Flux of ObjectA
// (flatMapMany returning Flux<ObjectA>).
// Problem code below...
// Some context: I am zipping ObjectA with a reference to an object
// I am saving. I am saving an object from earlier, which is stored in an
// AtomicReference<ObjectB>.
.flatMap(obj ->
        Flux.zip(Mono.just(obj), repo.save(atomicReferenceFromEarlier.get()))
                // Below, when calling "getId()" it logs the SAME ID FOR EACH OBJECT,
                // even though I want it to return EACH OBJECT'S ID THAT WAS SAVED.
                .map(myTuple2 -> log("I've saved this object {} ::", myTuple2.getT2().getId())))
// Further processing....
So, my ultimate issue is that the "second" parameter I'm zipping, the repo.save(someAtomicReferencedObject.get()), is the same for every "zipped" tuple.
In the following step I'm logging something like "I'm now building object", just to see what object was returned for each event, but the "second" object in my tuple is always the same.
How can I zip and ensure that the "save" call to the repo returns a unique object for each event in the Flux?
However, when I check the database, I really have saved unique entities for each event in my Flux. So the save is happening as expected, but the Mono the repo returns carries the same object for each tuple.
Please let me know if I should clarify something if anything is unclear. Thank you in advance for any and all help.
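One hedged observation, not a confirmed diagnosis: atomicReferenceFromEarlier.get() hands the same ObjectB instance to every save call, so every tuple's T2 is that single instance. Below is a minimal self-contained sketch (with a fake repository standing in for ReactiveCrudRepository) of zipping each element with a save of a fresh object, which yields a distinct saved entity per event:
import java.util.concurrent.atomic.AtomicLong;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ZipSaveDemo {

    static class ObjectB {
        Long id; // assigned on first save, like a generated key
    }

    static final AtomicLong SEQ = new AtomicLong();

    // Stand-in for repo.save: assigns an ID the first time an entity is saved.
    static Mono<ObjectB> save(ObjectB entity) {
        return Mono.fromSupplier(() -> {
            if (entity.id == null) {
                entity.id = SEQ.incrementAndGet();
            }
            return entity;
        });
    }

    public static void main(String[] args) {
        Flux.just("a", "b", "c")
                // A fresh ObjectB per element gives each tuple its own entity;
                // reusing one shared instance would log the same ID three times.
                .flatMap(objA -> Flux.zip(Mono.just(objA), save(new ObjectB())))
                .subscribe(t -> System.out.println(t.getT1() + " -> id " + t.getT2().id));
    }
}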

Spring Boot WebClient - Merge

I want to merge 2 responses and return a Flux.
private Flux<Response<List<Company>, Error>> loopGet(List<Entity> registries, Boolean status) {
    return Flux.fromIterable(registries)
            .flatMap(this::sendGetRequest)
            .mergeWith(Mono.just(fetch(status)));
}
This is what I am doing; it works, but I would like the merge to wait before calling Mono.just(fetch(status)).
To explain: sendGetRequest returns a Mono that makes an API call and saves things to the db from the result. Subsequently the merge calls the db with the fetch method, but that data is not updated yet. If I then make the call again, I get the updated data.
You need concatWith and fromCallable to ensure that fetch is called lazily, after the get requests have finished.
private Flux<Response<List<Company>, Error>> loopGet(List<Entity> registries, Boolean status) {
    return Flux.fromIterable(registries)
            .flatMap(this::sendGetRequest)
            .concatWith(Mono.fromCallable(() -> fetch(status)));
}
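Why this works (a minimal self-contained demo, not part of the original answer): Mono.just(fetch(status)) evaluates fetch eagerly while the pipeline is being assembled, whereas Mono.fromCallable defers the call until subscription, and with concatWith that subscription only happens after the preceding Flux completes.
import reactor.core.publisher.Mono;

public class LazyDemo {

    static String fetch() {
        System.out.println("fetch() called");
        return "data";
    }

    public static void main(String[] args) {
        // Prints "fetch() called" immediately, at assembly time:
        Mono<String> eager = Mono.just(fetch());

        // Prints nothing yet; fetch() runs only when someone subscribes:
        Mono<String> lazy = Mono.fromCallable(LazyDemo::fetch);

        System.out.println("subscribing...");
        lazy.subscribe(System.out::println);
    }
}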

Spring @Transactional not working with CompletableFuture [duplicate]

Idea
I have a processing method which takes in a list of items and processes them asynchronously using an external web service. The processing steps also persist data. At the end of the whole process, I want to persist the whole process along with each processed result.
Problem
I convert each item in the list into a CompletableFuture, run a processing task on them, and put them back into an array of futures. Then, using CompletableFuture.allOf (in the sequence method below), I complete a future when all the submitted tasks have completed and return another CompletableFuture which holds the results.
When I want to get that result, I call .whenComplete(..) and want to set the returned result into my entity as data and then persist it to the database. However, the repository save call just does nothing; the threads just continue running, and execution never gets past the repository save call.
@Transactional
public void process(List<Item> items) {
    List<Item> savedItems = itemRepository.save(items);
    final Process process = createNewProcess();
    final List<CompletableFuture<ProcessData>> futures = savedItems.stream()
            .map(item -> CompletableFuture.supplyAsync(() -> doProcess(item, process), executor))
            .collect(Collectors.toList());
    sequence(futures).whenComplete((data, throwable) -> {
        process.setData(data);
        processRepository.save(process); // <-- transaction lost?
        log.debug("Process DONE"); // <-- never reached
    });
}
Sequence method
private static <T> CompletableFuture<List<T>> sequence(List<CompletableFuture<T>> futures) {
    CompletableFuture<Void> allDoneFuture =
            CompletableFuture.allOf(futures.toArray(new CompletableFuture[futures.size()]));
    return allDoneFuture.thenApply(v ->
            futures.stream().map(CompletableFuture::join).collect(Collectors.toList())
    );
}
What is happening? Why is the persist call never executed? Is the thread that started the transaction unable to commit it, or where does it get lost? All the processed data comes back fine. I've tried different transaction strategies, but how is it possible to control which thread is going to finish the transaction, if that is the issue?
Any advice?
The reason for your problem is, as said above, that the transaction ends when the return of the method process(..) is reached.
What you can do is create the transaction manually; that gives you full control over when it starts and ends.
Remove @Transactional.
Autowire the TransactionManager, then in process(..):
TransactionDefinition txDef = new DefaultTransactionDefinition();
TransactionStatus txStatus = transactionManager.getTransaction(txDef);
try {
    // do your stuff here, committing once the async work has finished, e.g.:
    doWhateverAsync().thenRun(() -> transactionManager.commit(txStatus));
} catch (Exception e) {
    transactionManager.rollback(txStatus);
    throw e;
}
In the case of a Spring Boot application, you need the following configuration:
The main application class should be annotated with @EnableAsync.
The @Async annotation should be placed on the method carrying the @Transactional annotation. This is necessary to indicate that processing will take place in a child thread. A sketch of this arrangement follows.
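A hedged sketch of that arrangement (the class and method names are hypothetical, and the method must be invoked through the Spring proxy, i.e. from another bean, for either annotation to take effect):
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ItemProcessor {

    @Async          // the call is handed off to a task-executor thread
    @Transactional  // a transaction is then bound to that child thread
    public void processItem(Item item) {
        // persistence work done here commits when this method returns,
        // independently of the calling thread
    }
}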

Converting Mono to Pojo without block

Is there a way to convert a Mono object to a plain Java POJO?
I have a WebClient connecting to a 3rd-party REST service, and instead of returning a Mono I have to extract that object and interrogate it.
All the examples I have found return Mono<Pojo>, but I have to get the POJO itself. Currently I am doing it by calling block(), but is there a better way to avoid block?
The issue with block is that after a few runs it starts throwing an error like "block Terminated with error".
public MyPojo getPojo() {
    return myWebClient.get()
            .uri(generateUrl())
            .headers(createHttpHeaders(headersMap))
            .exchange()
            .flatMap(evaluateResponseStatus())
            .block();
}
private Function<ClientResponse, Mono<? extends MyPojo>> evaluateResponseStatus() {
    return response -> {
        if (response.statusCode() == HttpStatus.OK) {
            return response.bodyToMono(MyPojo.class);
        }
        if (webClientUtils.isError(response.statusCode())) {
            throw myHttpException(response);
            // This invokes my exceptionAdvice,
            // but after a few runs it's ignored and a 500 error is returned.
        }
        return Mono.empty();
    };
}
It's not a good idea to block in order to operate on a value in a reactive stream. Project Reactor offers a selection of operators for handling the objects within a stream.
In your case, you can write the getPojo() method like:
public Mono<MyPojo> getPojo() {
    return myWebClient.get()
            .uri(generateUrl())
            .headers(createHttpHeaders(headersMap))
            .retrieve()
            .onStatus(status -> webClientUtils.isError(status),
                    response -> Mono.error(myHttpException(response)))
            .bodyToMono(MyPojo.class);
}
Note that the onStatus call replaces the whole evaluateResponseStatus method from your example.
You would use this method like the following:
// some method
...
getPojo()
.map(pojo -> /* do something with the pojo instance */)
...
I strongly advise you to look into the "Transforming an existing sequence" section of the Project Reactor docs.
Since blocking with WebClient's block() is not recommended, another way to retrieve the values from an incoming HTTP response is to create a POJO in the calling application with the required fields. Then, once the Mono is received, use Mono.subscribe() and, within subscribe, add a lambda function with input, say x, that retrieves the individual fields using x's getters. The values can be printed on the console or assigned to a local variable for further processing. This helps in two ways:
Avoid the dreaded .block()
Keep the call asynchronous when pulling large volumes of data.
This is one of many ways to achieve the desired outcome; a sketch follows.
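A minimal self-contained sketch of that subscribe-based approach (the MyPojo record and the getPojoMono() helper here are stand-ins, not the code from the question):
import reactor.core.publisher.Mono;

public class SubscribeExample {

    record MyPojo(long id, String name) { } // stand-in for the real POJO

    // Stand-in for the WebClient call that returns Mono<MyPojo>.
    static Mono<MyPojo> getPojoMono() {
        return Mono.just(new MyPojo(1L, "example"));
    }

    public static void main(String[] args) {
        // Consume the value asynchronously instead of blocking for it:
        getPojoMono().subscribe(x ->
                System.out.println(x.id() + " / " + x.name()));
    }
}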

Spring integration flow: perform task within flow

Hopefully this is the last question I'm asking about Spring Integration.
I am facing the following problem: at the end of a pretty long IntegrationFlow DSL sheet there is this code:
return IntegrationFlows
        // ...
        // IntegrationMessageHeaderAccessor.CORRELATION_ID is not acceptable, though the
        // message came from the outgoingGateway of another application with this header set
        .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header("jms_replyTo", responseQueue(), true))
        .handle(requestRepository::save)
        .handle(
                Jms.outboundAdapter(queueConnectionFactory()).destination(serverQueue())
        )
        .get();
The problem is that after some code like requestRepository::save, the handler chain becomes broken. This trick only works if a gateway is passed in as the handler parameter.
How can I overcome this limitation? I think that utilizing a wireTap here won't do the job, because it is asynchronous. What I am actually doing here is saving the message in order to store its jms_replyTo header and replace it with the saved one after the corresponding reply comes back from the server (the Smart Proxy enterprise integration pattern).
Any suggestions, please?
Not sure why you say "the last question". Are you going to give up on Spring Integration? :-(
I guess your problem is with the next .handle(), because your requestRepository::save is a one-way MessageHandler (a void return from the save() method), or your save() returns null.
The IntegrationFlow is a chain of executions where each next one is called with the previous one's non-null result.
So, share your requestRepository::save, please!
UPDATE
Declaring a MessageHandler bean as (m) -> requestRepository.save(m) and passing it into the handle(..) method as a param didn't help either.
Yeah... I would like to see the signature of your requestRepository::save.
So, look: using a method reference with .handle(), you have to determine your scenario. If you want a one-way handle that stops the flow, it is enough to follow the org.springframework.messaging.MessageHandler contract. With that, your method signature should look like this:
public void myHandle(Message<?> message)
If you'd like to go ahead with the flow, you should return something from your service method, and that result becomes the payload for the next .handle().
In that case your method should follow the org.springframework.integration.dsl.support.GenericHandler contract. Your method signature may look like:
public Bar myHandle(Foo foo, Map<String, Object> headers)
That's how method references work for .handle().
You should understand how this method-chain style works: the output of the previous method is the input of the next. In our case the flow is protected from dead code, such as a one-way MessageHandler with a void return followed by another flow member. That's why you see the "This is the end of the integration flow." error.
Finally I came up with a solution:
@Bean
public MessageHandler requestPersistingHandler() {
    return new ServiceActivatingHandler(message -> {
        requestRepository.save(message);
        return message.getPayload();
    });
}

// ...

@Bean
public IntegrationFlow requestFlow() {
    return IntegrationFlows.from(
            Jms.messageDrivenChannelAdapter(queueConnectionFactory()).destination(bookingQueue())
    )
            .wireTap(controlBusMessageChannel())
            .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header(JMS_REPLY_HEADER, responseQueue(), true))
            .handle(requestPersistingHandler())
            .handle(
                    Jms.outboundAdapter(queueConnectionFactory()).destination(serverQueue())
            )
            .get();
}
I'm just not sure whether there is a more straightforward way.
The only issue left is how to change a header in the "from-the-server" IntegrationFlow within the enrichHeaders method: I have no idea how to access the existing headers through the specification. A possible direction is sketched below.
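A hedged sketch for that remaining question (the savedReplyTo header name is hypothetical): HeaderEnricherSpec also exposes headerFunction, whose function receives the whole Message and can therefore read the existing headers when computing the new value:
.enrichHeaders(spec -> spec.headerFunction("jms_replyTo",
        message -> message.getHeaders().get("savedReplyTo"), // hypothetical previously stored header
        true)) // overwrite any existing value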
