I'm currently trying to poll multiple (different) endpoints.
The problem is that I want to keep polling only the endpoints which didn't yet return the status I need, in an aggregated manner. So the flow is basically:
build the requests -> merge them into one stream -> poll for a response -> check if the status matches:
if it doesn't, wait and redo the flow
if it does, take that observable out of the stream
This is what I have written, and it feels like I'm missing something:
Observable.merge(buildRequests())
    .repeatWhen(obs -> obs.delay(5000, TimeUnit.MILLISECONDS))
    .takeUntil(response -> CheckShouldRepeat(response))
    .subscribe(whatever());
thanks a bunch!
Observable.fromCallable(() -> buildRequests())
    .repeatWhen(o -> o.flatMap(v -> Observable.timer(5000, TimeUnit.MILLISECONDS)));
This can help.
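To keep only the unfinished endpoints polling, one hedged sketch (untested, RxJava-1 style like the question) is to give every request its own repeat/takeUntil cycle before merging, so each endpoint leaves the merged stream independently. Here buildRequests() is assumed to return a List<Observable<Response>>, and statusMatches(...) / handleResponse(...) are hypothetical helpers standing in for the question's status check and subscriber:
List<Observable<Response>> polls = new ArrayList<>();
for (Observable<Response> request : buildRequests()) {
    polls.add(request
            // re-poll this endpoint every 5 seconds until its stream completes
            .repeatWhen(completed -> completed.delay(5, TimeUnit.SECONDS))
            // statusMatches(...) is hypothetical: true once the desired status arrived,
            // which completes this endpoint's stream and removes it from the merge
            .takeUntil(response -> statusMatches(response)));
}
Observable.merge(polls).subscribe(response -> handleResponse(response));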
I am trying to implement a delay in a Spring Integration Flow.
I have one flow that starts a process on another server and then checks, after a delay, whether that process has completed.
When it completes, the flow should move to the next phase.
This seems to work, but it also shows in the logs (and, clearly, in the flow itself) a long list of repetitions in the runexampleScriptWaiting channel.
I tried removing that channel change but then the flow gets stuck in that phase forever, never moving to completion.
How can I implement this so that a single runexampleScriptWaiting is shown / executed (something similar to a non-blocking while loop, I guess)?
I considered keeping it as is and just updating my monitoring application (a very small frontend that shows which channels are in the payload's history) to get rid of the duplicated channel lines, but I wondered if there is a better / more robust way to do this.
Here's a simplified example:
@Bean
public IntegrationFlow exampleIntegrationFlow() {
    return IntegrationFlows
        .from(exampleConfig.runexampleScript.get())
        .<ExamplePayload>handle((payload, messageHeaders) -> examplePayloadService
            .changeExampleServiceRequestStatus(payload, ExampleServiceStatus.STARTED))
        .<ExamplePayload>handle(
            (payload, messageHeaders) -> exampleScriptService.runexample(payload))
        .channel(exampleConfig.runexampleScriptWaiting.get())
        .<ExamplePayload, Boolean>route(jobStatusService::areJobsFinished,
            router -> router
                .subFlowMapping(true, exampleSuccessSubflow())
                .subFlowMapping(false, exampleWaitSubflow())
                .defaultOutputToParentFlow()
        )
        .get();
}
@Bean
public IntegrationFlow exampleWaitSubflow() {
    return IntegrationFlows
        .from(exampleConfig.runexampleScriptWaiting.get())
        .<ExamplePayload>handle(
            (payload, messageHeaders) -> {
                interruptIgnoringSleep(1000);
                return payload;
            })
        .channel(exampleConfig.runexampleScriptWaiting.get()) // Commenting this gets the process stuck
        .get();
}
It is not clear what your exampleConfig.runexampleScriptWaiting.get() is, but what you have so far in the config is not OK. You have two subscribers to the same channel:
.channel(exampleConfig.runexampleScriptWaiting.get()) and the next route()
.from(exampleConfig.runexampleScriptWaiting.get()) and the next handle()
This may cause unexpected behavior, e.g. round-robin messages distribution.
I would use filter() and delay() instead, in addition to an ExecutorChannel, since you are asking about non-blocking retry:
.channel(exampleConfig.runexampleScriptWaiting.get())
.filter(jobStatusService::areJobsFinished,
    filter -> filter.discardFlow(
        discardFlow -> discardFlow
            .delay(1000)
            .channel(exampleConfig.runexampleScriptWaiting.get())))
The exampleSuccessSubflow could go just after this filter() as part of this flow or via to(exampleSuccessSubflow()).
Pay attention to that discardFlow: we delay the non-finished message a little and produce it back to the runexampleScriptWaiting channel so this filter is called again. If you make this channel an ExecutorChannel (or QueueChannel), your wait functionality becomes non-blocking. But at the same time your main flow is still going to block for this request, since you continue waiting for a reply. Therefore it might not make much sense to make this filtering logic non-blocking, and you could still use that Thread.sleep() instead of delay().
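For illustration, a minimal sketch of how that waiting channel could be declared as an ExecutorChannel; the bean name and the executor are assumptions, not something from the original configuration:
@Bean
public MessageChannel runexampleScriptWaiting() {
    // hand messages over to another thread so the delayed re-check in filter()
    // does not run on the producer's thread
    return new ExecutorChannel(Executors.newSingleThreadExecutor());
}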
The router solution may also work, but you cannot use that runexampleScriptWaiting channel as the input of that sub-flow. That is probably the reason behind your "process stuck" problem.
I'm quite new to Mono and Flux. I'm trying to join several downstream API responses in a traditional blocking application. I don't want to collect a list of Mono; I want a List of the payloads returned from the downstream APIs, which I fetch from each Mono. However, the 'result' returned to the controller at times contains only some, or none, of the downstream API responses. What is the correct way to do this? I've read several posts; How to iterate Flux and mix with Mono states:
you should not call subscribe anywhere in a web application. If this is bound to an HTTP request, you're basically triggering the reactive pipeline with no guarantee about resources or completion. Calling subscribe triggers the pipeline but does not wait until it's complete
Should I be using CompletableFuture?
In my Service I attempted
var result = new ArrayList<List<X>>();
List<Mono<X>> monoList = apiCall();
Flux.fromIterable(monoList)
    .flatMap(m -> m.doOnSuccess(
        x -> {
            result.add(x.getData());
        }
    )).subscribe();
I also attempted the following in the controller, but the method returns without waiting for subscribe to complete:
var result = new ArrayList<List<X>>();
Flux.concat(
this.service.callApis(result, ...)
).subscribe();
return result;
In my service
public Mono<Void> callApis(List<List<X>> result, ..) {
    ...
    return Flux.fromIterable(monoList)
        .flatMap(m -> m.doOnSuccess(
            x -> {
                result.add(x.getData()...);
            }
        )).then();
}
The Project Reactor documentation (which is very good) has a section called Which operator do I need?. You need to create a Flux from your API calls, combine the results, and then return to the synchronous world.
In your case, it looks like all your downstream services have the same API, so they all return the same type and it doesn't really matter what order those responses appear in your application. Also, I'm assuming that apiCall() returns a List<Mono<Response>>. You probably want something like
Flux.fromIterable(apiCall())             // Flux<Mono<Response>>
    .flatMap(mono -> mono)               // Flux<Response>
    .map(response -> response.getData()) // Flux<List<X>>
    .collectList()                       // Mono<List<List<X>>>
    .block();                            // List<List<X>>
The fromIterable(...).flatMap(x->x) construct just converts your List<Mono<R>> into a Flux<R>.
map() is used to extract the data part of your response.
collectList() creates a Mono that waits until the Flux completes, and gives a single result containing all the data lists.
block() subscribes to the Mono returned by the previous operator, and blocks until it is complete, which will (in this case) be when all the Monos returned by apiCall() have completed.
There are many possible alternatives here, and which is most suitable will depend on your exact use case.
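For instance, wrapped in a plain blocking service method it might look like the sketch below; the method name and the timeout are illustrative, and it assumes apiCall() returns List<Mono<Response>> and Response#getData() returns List<X> as above:
public List<List<X>> fetchAllData() {
    return Flux.fromIterable(apiCall())         // Flux<Mono<Response>>
            .flatMap(mono -> mono)              // Flux<Response>
            .map(Response::getData)             // Flux<List<X>>
            .collectList()                      // Mono<List<List<X>>>
            .block(Duration.ofSeconds(30));     // bound the wait instead of blocking forever
}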
So my use-case is to consume messages from Kafka in a Spring Webflux application while programming in the reactive style using Project Reactor, and to perform a non-blocking operation for each message in the same order as the messages were received from Kafka. The system should also be able to recover on its own.
Here is the code snippet that is set up to consume from Kafka:
Flux<ReceiverRecord<Integer, DataDocument>> messages = Flux.defer(() -> {
    KafkaReceiver<Integer, DataDocument> receiver = KafkaReceiver.create(options);
    return receiver.receive();
});
messages.map(this::transformToOutputFormat)
    .map(this::performAction)
    .flatMapSequential(receiverRecordMono -> receiverRecordMono)
    .doOnNext(record -> record.receiverOffset().acknowledge())
    .doOnError(error -> logger.error("Error receiving record", error))
    .retryBackoff(100, Duration.ofSeconds(5), Duration.ofMinutes(5))
    .subscribe();
As you can see, what I do is: take the message from Kafka, transform it into an object intended for a new destination, then send it to the destination, and then acknowledge the offset to mark the message as consumed and processed. It is critical to acknowledge the offset in the same order as the messages being consumed from Kafka so that we don't move the offset beyond messages that were not fully processed (including sending some data to the destination). Hence I'm using a flatMapSequential to ensure this.
For simplicity let's assume the transformToOutputFormat() method is an identity transform.
public ReceiverRecord<Integer, DataDocument> transformToOutputFormat(ReceiverRecord<Integer, DataDocument> record) {
    return record;
}
The performAction() method needs to do something over the network, say call an HTTP REST API. So the appropriate APIs return a Mono, which means the chain needs to be subscribed to. Also, I need the ReceiverRecord to be returned by this method so that the offset can be acknowledged in the flatMapSequential() operator above. Because I need the Mono subscribed to, I'm using flatMapSequential above. If not, I could have used a map instead.
public Mono<ReceiverRecord<Integer, DataDocument>> performAction(ReceiverRecord<Integer, DataDocument> record) {
    return Mono.just(record)
        .flatMap(receiverRecord ->
            HttpClient.create()
                .port(3000)
                .get()
                .uri("/makeCall?data=" + receiverRecord.value().getData())
                .responseContent()
                .aggregate()
                .asString()
        )
        .retryBackoff(100, Duration.ofSeconds(5), Duration.ofMinutes(5))
        .then(Mono.just(record));
}
I have two conflicting needs in this method:
1. Subscribe to the chain that makes the HTTP call
2. Return the ReceiverRecord
Using a flatMap() means my return type changes to a Mono. Using doOnNext() in the same place would retain the ReceiverRecord in the chain, but would not allow the HttpClient response to be subscribed to automatically.
I can't add .subscribe() after asString(), because I want to wait till the HTTP response is completely received before the offset is acknowledged.
I can't use .block() either since it runs on a parallel thread.
As a result, I need to cheat and return the record object from the method scope.
The other thing is that on a retry inside performAction it switches threads. Since flatMapSequential() eagerly subscribes to each Mono in the outer flux, this means that while acknowledgement of offsets can be guaranteed in order, we can't guarantee that the HTTP call in performAction will be performed in the same order.
So I have two questions.
Is it possible to return record in a natural way rather than returning the method scope object?
Is it possible to ensure that both the HTTP call as well as the offset acknowledgement are performed in the same order as the messages for which these operations are occurring?
Here is the solution I have come up with.
Flux<ReceiverRecord<Integer, DataDocument>> messages = Flux.defer(() -> {
    KafkaReceiver<Integer, DataDocument> receiver = KafkaReceiver.create(options);
    return receiver.receive();
});
messages.map(this::transformToOutputFormat)
    .delayUntil(this::performAction)
    .doOnNext(record -> record.receiverOffset().acknowledge())
    .doOnError(error -> logger.error("Error receiving record", error))
    .retryBackoff(100, Duration.ofSeconds(5), Duration.ofMinutes(5))
    .subscribe();
Instead of using flatMapSequential to subscribe to the performAction Mono and preserve the sequence, I have delayed the request for more messages from the Kafka receiver until the action is performed. This enables the one-at-a-time processing that I need.
As a result, performAction doesn't need to return a Mono of ReceiverRecord. I also simplified it to the following:
public Mono<String> performAction(ReceiverRecord<Integer, DataDocument> record) {
    return HttpClient.create()
        .port(3000)
        .get()
        .uri("/makeCall?data=" + record.value().getData())
        .responseContent()
        .aggregate()
        .asString()
        .retryBackoff(100, Duration.ofSeconds(5), Duration.ofMinutes(5));
}
I need to send some data after a user registers. I want to make the first attempt in the main thread, but if there are any errors, I want to retry 5 times with a 10-minute interval.
@Override
public void sendRegisterInfo(MailData data) {
    Mono.just(data)
        .doOnNext(this::send)
        .doOnError(ex -> logger.warn("Main queue {}", ex.getMessage()))
        .doOnSuccess(d -> logger.info("Send mail to {}", d.getRecipient()))
        .onErrorResume(ex -> retryQueue(data))
        .subscribe();
}
private Mono<MailData> retryQueue(MailData data) {
    return Mono.just(data)
        .delayElement(Duration.of(10, ChronoUnit.MINUTES))
        .doOnNext(this::send)
        .doOnError(ex -> logger.warn("Retry queue {}", ex.getMessage()))
        .doOnSuccess(d -> logger.info("Send mail to {}", d.getRecipient()))
        .retry(5);
}
It works.
But I've got some questions:
Is it correct to perform the operation in the doOnNext function?
Is it correct to use delayElement to create a delay between executions?
Is the thread blocked while waiting for the delay?
And what is the best practice for retrying on error with a delay between retries?
doOnXXX for logging is fine. But for the actual element processing, you should prefer flatMap over doOnNext (assuming your processing is asynchronous / can be converted to returning a Flux/Mono).
This is correct. Another way is to turn the code around and start from a Flux.interval, but here delayElement is better IMO.
The delay runs on a separate thread/scheduler (by default, Schedulers.parallel()), so not blocking the main thread.
There's actually a Retry builder dedicated to that kind of use case in the reactor-extra addon: https://github.com/reactor/reactor-addons/blob/master/reactor-extra/src/main/java/reactor/retry/Retry.java
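A rough sketch of that builder applied to this use case might look like the following; it assumes the reactor-extra dependency is on the classpath and that send(data) throws on failure, and it is only one possible way to wire it:
// retry up to 5 times with a fixed 10-minute backoff between attempts;
// the backoff is scheduled on a separate scheduler, so nothing blocks
Mono.fromRunnable(() -> send(data))
    .retryWhen(reactor.retry.Retry.any()
            .fixedBackoff(Duration.ofMinutes(10))
            .retryMax(5))
    .doOnError(ex -> logger.warn("Retry queue {}", ex.getMessage()))
    .subscribe();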
I'm using retryWhen when an external HTTP request to one of my external services fails.
The problem is that I'm using
RxHelper.toObservable(httpClient.request(method, url))
to get my observable response, and because Vert.x internally uses a ReadStreamAdapter I cannot use retryWhen; it complains with:
java.lang.IllegalStateException: Request already complete
Here is a code example:
RxHelper.toObservable(httpClient.request(method, url))
    .retryWhen(new ServiceExceptionRetry())
    .subscribe(f -> replySuccess(eventMsg, event, f), t -> handleError(t, eventMsg, event));
Any idea how to achieve this?
You can use defer to create the Observable from the method and client anew on every subscription; retryWhen then resubscribes to the deferred factory instead of the already-completed request, like this:
Observable.defer(() -> RxHelper.toObservable(httpClient.request(method, url)))
    .retryWhen(new ServiceExceptionRetry())
    .subscribe(f -> replySuccess(eventMsg, event, f), t -> handleError(t, eventMsg, event));