How to correctly implement delay in Spring Integration Flow - java

I am trying to implement a delay in a Spring Integration Flow.
I have one flow that starts a process on another server and then, after a delay, checks whether that process has completed or not.
When it has completed, the flow should move on to the next phase.
This seems to work, but it also shows in the logs (and, clearly, in the flow itself) a long list of repetitions of the runexampleScriptWaiting channel.
I tried removing that channel change but then the flow gets stuck in that phase forever, never moving to completion.
How can I implement this so that a single runexampleScriptWaiting is shown / executed (something similar to a non-blocking while loop, I guess)?
I considered keeping it as is and just updating my monitoring application (a very small frontend that shows which channels are in the payload's history) to get rid of the duplicated channel lines, but I also wondered whether there is a better / more robust way to do this.
Here's a simplified example:
@Bean
public IntegrationFlow exampleIntegrationFlow() {
    return IntegrationFlows
            .from(exampleConfig.runexampleScript.get())
            .<ExamplePayload>handle((payload, messageHeaders) -> examplePayloadService
                    .changeExampleServiceRequestStatus(payload, ExampleServiceStatus.STARTED))
            .<ExamplePayload>handle(
                    (payload, messageHeaders) -> exampleScriptService.runexample(payload))
            .channel(exampleConfig.runexampleScriptWaiting.get())
            .<ExamplePayload, Boolean>route(jobStatusService::areJobsFinished,
                    router -> router
                            .subFlowMapping(true, exampleSuccessSubflow())
                            .subFlowMapping(false, exampleWaitSubflow())
                            .defaultOutputToParentFlow())
            .get();
}
@Bean
public IntegrationFlow exampleWaitSubflow() {
    return IntegrationFlows
            .from(exampleConfig.runexampleScriptWaiting.get())
            .<ExamplePayload>handle(
                    (payload, messageHeaders) -> {
                        interruptIgnoringSleep(1000);
                        return payload;
                    })
            .channel(exampleConfig.runexampleScriptWaiting.get()) // commenting this out gets the process stuck
            .get();
}

It is not clear what your exampleConfig.runexampleScriptWaiting.get() is, but what you have so far in the config is not OK. You have two subscribers on the same channel:
.channel(exampleConfig.runexampleScriptWaiting.get()) followed by the route()
.from(exampleConfig.runexampleScriptWaiting.get()) followed by the handle()
This may cause unexpected behavior, e.g. round-robin message distribution.
I would use filter() and delay() instead, in addition to an ExecutorChannel, since you are asking about a non-blocking retry:
.channel(exampleConfig.runexampleScriptWaiting.get())
.filter(jobStatusService::areJobsFinished,
        filter -> filter.discardFlow(
                discardFlow -> discardFlow
                        .delay(1000)
                        .channel(exampleConfig.runexampleScriptWaiting.get())))
The exampleSuccessSubflow could go just after this filter() as part of this flow or via to(exampleSuccessSubflow()).
Pay attention to that discardFlow: we delay the not-yet-finished message a little bit and produce it back to the runexampleScriptWaiting channel, so that this filter is called again. If you make this channel an ExecutorChannel (or a QueueChannel), your wait functionality becomes non-blocking. At the same time, though, your main flow is still blocked for this request, since you continue waiting for the reply. Therefore it might not make much sense to make this filtering logic non-blocking, and you could still use that Thread.sleep() instead of delay().
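For reference, a minimal sketch of how that waiting channel could be declared as an ExecutorChannel; the bean name and the single-thread executor here are illustrative assumptions, not part of the original exampleConfig:

import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.channel.ExecutorChannel;
import org.springframework.messaging.MessageChannel;

@Bean
public MessageChannel runexampleScriptWaiting() {
    // Hands each message off to another thread, so the producing flow
    // is not blocked while the filter/delay loop is waiting.
    return new ExecutorChannel(Executors.newSingleThreadExecutor());
}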
The router solution may also work, but you cannot use that runexampleScriptWaiting channel as the input of that sub-flow. That is probably the reason behind your "process stuck" problem.

Related

Spring Webflux endpoint working as a topic

I have a Flux endpoint that I provide to clients (subscribers) to receive updated prices. I'm testing it by accessing the URL (http://localhost:8080/prices) through the browser and it works fine. The problem I'm facing (I may be missing some concepts here) is that when I open this URL in many browsers, I expect to receive the notification in all of them, but only one receives it. It is working as a queue instead of a topic (like in message brokers). Is that the correct behavior?
@GetMapping(value = "prices", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<Collection<Price>>> prices() {
    return Flux.interval(Duration.ofSeconds(5))
            .map(sec -> pricesQueue.get())
            .filter(prices -> !prices.isEmpty())
            .map(prices -> ServerSentEvent.<Collection<Price>>builder()
                    .event("status-changed")
                    .data(prices)
                    .build());
}
get isn't a standard queue operation, but this is almost certainly because your pricesQueue.get() method isn't idempotent. With every request (with every browser window you open in this case), you'll get a new flux that calls pricesQueue.get() every 5 seconds. Now if pricesQueue.get() just retrieves the latest item in the queue and does nothing with it, all is good - all your subscribers receive the same item, and the same item is displayed. But if it acts more like a poll() where it removes the item in the queue after it's retrieved it, then only the first flux will get that value - the rest won't, as by that point it will have been removed.
You've really two main options here:
Change your get() implementation (or implement a new method) so that it doesn't mutate the queue, only retrieves a value.
Turn the flux into a hot flux. Store Flux.interval(Duration.ofSeconds(5)).map(sec -> pricesQueue.get()).publish().autoConnect() somewhere as a field (let's say as queueFlux), then just return queueFlux.filter(prices -> !prices.isEmpty()).map(...) in your controller method.
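A minimal sketch of the second option, reusing the controller from the question (the queueFlux field name is illustrative; this assumes pricesQueue.get() is safe to call from a single shared pipeline):

// One shared upstream: interval + get() run once per tick for all subscribers.
private final Flux<Collection<Price>> queueFlux = Flux.interval(Duration.ofSeconds(5))
        .map(sec -> pricesQueue.get())
        .publish()
        .autoConnect();

@GetMapping(value = "prices", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<Collection<Price>>> prices() {
    // Every request subscribes to the same hot flux, so all browsers see the same items.
    return queueFlux
            .filter(prices -> !prices.isEmpty())
            .map(prices -> ServerSentEvent.<Collection<Price>>builder()
                    .event("status-changed")
                    .data(prices)
                    .build());
}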

How to detect recovery in a retrying Flux?

I have successfully setup a Flux receiving events from a remote system (the protocol is websocket but that's irrelevant for the question) and handling connection glitches gracefully using retryBackoff method. The code (simplified) is something like this:
Flux flux = myEventFlux
        .retryBackoff(Long.MAX_VALUE, Duration.ofSeconds(5))
        .publish()
        .refCount();

flux.subscribe(System.out::println);
Now I'd like to handle connection-lost and connection-recovery events in order to show some cues in the UI, or at least register some logs. Detecting errors seems easy: a doOnError before retryBackoff does the trick. But recovery is another story... I need something like "first successful event after an error", so I've tried this:
flux.next().subscribe( event -> System.out.println("first = " + event) );
It works in the first normal connection (no previous error) but not in subsequent reconnections after errors.
The difficulty with what you want to achieve is that, from the perspective of myEventFlux, there is no way to distinguish two legitimate end-of-line subscribers subscribing to the retrying Flux from one subscriber that triggers two attempts.
So you could use doOnSubscribe (or doFirst since 3.2.10.RELEASE), but it would be subject to the limitation above. It would also trigger on the original connection, not just the retries...
Maybe for the UI use case this would still help?
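As a sketch of that idea (the log messages are illustrative; doOnSubscribe and doOnError sit upstream of retryBackoff so they fire again on every retry):

Flux<?> flux = myEventFlux
        // Fires on the initial subscription AND on every retry resubscription.
        .doOnSubscribe(s -> System.out.println("connecting (or reconnecting)..."))
        // Fires before retryBackoff swallows the error and schedules a retry.
        .doOnError(e -> System.out.println("connection lost: " + e))
        .retryBackoff(Long.MAX_VALUE, Duration.ofSeconds(5))
        .publish()
        .refCount();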

Invoking non-blocking operations sequentially while consuming from a Flux including retries

So my use-case is to consume messages from Kafka in a Spring Webflux application while programming in the reactive style using Project Reactor, and to perform a non-blocking operation for each message in the same order as the messages were received from Kafka. The system should also be able to recover on its own.
Here is the code snippet that is set up to consume from Kafka:
Flux<ReceiverRecord<Integer, DataDocument>> messages = Flux.defer(() -> {
    KafkaReceiver<Integer, DataDocument> receiver = KafkaReceiver.create(options);
    return receiver.receive();
});

messages.map(this::transformToOutputFormat)
        .map(this::performAction)
        .flatMapSequential(receiverRecordMono -> receiverRecordMono)
        .doOnNext(record -> record.receiverOffset().acknowledge())
        .doOnError(error -> logger.error("Error receiving record", error))
        .retryBackoff(100, Duration.ofSeconds(5), Duration.ofMinutes(5))
        .subscribe();
As you can see, what I do is: take the message from Kafka, transform it into an object intended for a new destination, then send it to the destination, and then acknowledge the offset to mark the message as consumed and processed. It is critical to acknowledge the offset in the same order as the messages being consumed from Kafka so that we don't move the offset beyond messages that were not fully processed (including sending some data to the destination). Hence I'm using a flatMapSequential to ensure this.
For simplicity let's assume the transformToOutputFormat() method is an identity transform.
public ReceiverRecord<Integer, DataDocument> transformToOutputFormat(
        ReceiverRecord<Integer, DataDocument> record) {
    return record;
}
The performAction() method needs to do something over the network, say call an HTTP REST API. So the appropriate APIs return a Mono, which means the chain needs to be subscribed to. Also, I need the ReceiverRecord to be returned by this method so that the offset can be acknowledged in the flatMapSequential() operator above. Because I need the Mono subscribed to, I'm using flatMapSequential above. If not, I could have used a map instead.
public Mono<ReceiverRecord<Integer, DataDocument>> performAction(ReceiverRecord<Integer, DataDocument> record) {
    return Mono.just(record)
            .flatMap(receiverRecord ->
                    HttpClient.create()
                            .port(3000)
                            .get()
                            .uri("/makeCall?data=" + receiverRecord.value().getData())
                            .responseContent()
                            .aggregate()
                            .asString())
            .retryBackoff(100, Duration.ofSeconds(5), Duration.ofMinutes(5))
            .then(Mono.just(record));
}
I have two conflicting needs in this method:
1. Subscribe to the chain that makes the HTTP call
2. Return the ReceiverRecord
Using a flatMap() means my return type changes to a Mono. Using doOnNext() in the same place would retain the ReceiverRecord in the chain, but would not allow the HttpClient response to be subscribed to automatically.
I can't add .subscribe() after asString(), because I want to wait till the HTTP response is completely received before the offset is acknowledged.
I can't use .block() either since it runs on a parallel thread.
As a result, I need to cheat and return the record object from the method scope.
The other thing is that on a retry inside performAction it switches threads. Since flatMapSequential() eagerly subscribes to each Mono in the outer flux, this means that while acknowledgement of offsets can be guaranteed in order, we can't guarantee that the HTTP call in performAction will be performed in the same order.
So I have two questions.
Is it possible to return record in a natural way rather than returning the method scope object?
Is it possible to ensure that both the HTTP call as well as the offset acknowledgement are performed in the same order as the messages for which these operations are occurring?
Here is the solution I have come up with.
Flux<ReceiverRecord<Integer, DataDocument>> messages = Flux.defer(() -> {
    KafkaReceiver<Integer, DataDocument> receiver = KafkaReceiver.create(options);
    return receiver.receive();
});

messages.map(this::transformToOutputFormat)
        .delayUntil(this::performAction)
        .doOnNext(record -> record.receiverOffset().acknowledge())
        .doOnError(error -> logger.error("Error receiving record", error))
        .retryBackoff(100, Duration.ofSeconds(5), Duration.ofMinutes(5))
        .subscribe();
Instead of using flatMapSequential to subscribe to the performAction Mono and preserve sequence, what I've done instead is delayed the request for more messages from the Kafka receiver until the action is performed. This enables the one-at-a-time processing that I need.
As a result, performAction doesn't need to return a Mono of ReceiverRecord. I also simplified it to the following:
public Mono<String> performAction(ReceiverRecord<Integer, DataDocument> record) {
    return HttpClient.create()
            .port(3000)
            .get()
            .uri("/makeCall?data=" + record.value().getData())
            .responseContent()
            .aggregate()
            .asString()
            .retryBackoff(100, Duration.ofSeconds(5), Duration.ofMinutes(5));
}

Http Websocket as Akka Stream Source

I'd like to listen on a websocket using akka streams. That is, I'd like to treat it as nothing but a Source.
However, all official examples treat the websocket connection as a Flow.
My current approach is using the websocketClientFlow in combination with a Source.maybe. This eventually results in the upstream failing due to a TcpIdleTimeoutException, when there are no new Messages being sent down the stream.
Therefore, my question is twofold:
Is there a way – which I obviously missed – to treat a websocket as just a Source?
If using the Flow is the only option, how does one handle the TcpIdleTimeoutException properly? The exception cannot be handled by providing a stream supervision strategy. Restarting the source by using a RestartSource doesn't help either, because the source is not the problem.
Update
So I tried two different approaches, setting the idle timeout to 1 second for convenience
application.conf
akka.http.client.idle-timeout = 1s
Using keepAlive (as suggested by Stefano)
Source.<Message>maybe()
        .keepAlive(Duration.apply(1, "second"), () -> (Message) TextMessage.create("keepalive"))
        .viaMat(Http.get(system).webSocketClientFlow(WebSocketRequest.create(websocketUri)), Keep.right())
        { ... }
When doing this, the Upstream still fails with a TcpIdleTimeoutException.
Using RestartFlow
However, I found out about this approach, using a RestartFlow:
final Flow<Message, Message, NotUsed> restartWebsocketFlow = RestartFlow.withBackoff(
        Duration.apply(3, TimeUnit.SECONDS),
        Duration.apply(30, TimeUnit.SECONDS),
        0.2,
        () -> createWebsocketFlow(system, websocketUri)
);

Source.<Message>maybe()
        .viaMat(restartWebsocketFlow, Keep.right()) // one can treat this part of the resulting graph as a Source<Message, NotUsed>
        { ... }

(...)

private Flow<Message, Message, CompletionStage<WebSocketUpgradeResponse>> createWebsocketFlow(
        final ActorSystem system, final String websocketUri) {
    return Http.get(system).webSocketClientFlow(WebSocketRequest.create(websocketUri));
}
This works in that I can treat the websocket as a Source (although artificially, as explained by Stefano) and keep the TCP connection alive by restarting the websocketClientFlow whenever an exception occurs.
This doesn't feel like the optimal solution though.
No. WebSocket is a bidirectional channel, and Akka-HTTP therefore models it as a Flow. If in your specific case you care only about one side of the channel, it's up to you to form a Flow with a "muted" side, by using either Flow.fromSinkAndSource(Sink.ignore, mySource) or Flow.fromSinkAndSource(mySink, Source.maybe), depending on the case.
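For illustration, a minimal Java DSL sketch of that "muted side" idea applied to the client flow from the question (system and websocketUri are taken from the question; Keep.none() simply discards both materialized values):

// A Source of incoming websocket messages whose outgoing side is muted:
// Source.maybe() never emits, so nothing is ever sent to the server.
Source<Message, NotUsed> incoming =
        Source.<Message>maybe()
                .viaMat(Http.get(system).webSocketClientFlow(
                        WebSocketRequest.create(websocketUri)), Keep.none());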
as per the documentation:
"Inactive WebSocket connections will be dropped according to the idle-timeout settings. In case you need to keep inactive connections alive, you can either tweak your idle-timeout or inject 'keep-alive' messages regularly."
There is an ad-hoc combinator to inject keep-alive messages, see the example below and this Akka cookbook recipe. NB: this should happen on the client side.
src.keepAlive(1.second, () => TextMessage.Strict("ping"))
I hope I understand your question correctly. Are you looking for asSourceOf?
path("measurements") {
entity(asSourceOf[Measurement]) { measurements =>
// measurement has type Source[Measurement, NotUsed]
...
}
}

Project Reactor async send email with retry on error

I need to send some data after the user has registered. I want to do the first attempt in the main thread, but if there are any errors, I want to retry 5 times with a 10-minute interval.
@Override
public void sendRegisterInfo(MailData data) {
    Mono.just(data)
            .doOnNext(this::send)
            .doOnError(ex -> logger.warn("Main queue {}", ex.getMessage()))
            .doOnSuccess(d -> logger.info("Send mail to {}", d.getRecipient()))
            .onErrorResume(ex -> retryQueue(data))
            .subscribe();
}
private Mono<MailData> retryQueue(MailData data) {
    return Mono.just(data)
            .delayElement(Duration.of(10, ChronoUnit.MINUTES))
            .doOnNext(this::send)
            .doOnError(ex -> logger.warn("Retry queue {}", ex.getMessage()))
            .doOnSuccess(d -> logger.info("Send mail to {}", d.getRecipient()))
            .retry(5); // return the Mono itself; the caller's onErrorResume subscribes to it
}
It works.
But I've got some questions:
1. Was it correct to perform the operation in the doOnNext function?
2. Is it correct to use delayElement to create a delay between executions?
3. Is the thread blocked while waiting for the delay?
4. What is the best practice for retrying on error with a delay between attempts?
1. doOnXXX for logging is fine. But for the actual element processing you should prefer flatMap over doOnNext (assuming your processing is asynchronous / can be converted to returning a Flux/Mono).
2. This is correct. Another way is to turn the code around and start from a Flux.interval, but here delayElement is better IMO.
3. The delay runs on a separate thread/scheduler (by default, Schedulers.parallel()), so it does not block the main thread.
4. There's actually a Retry builder dedicated to that kind of use case in the reactor-extra addon: https://github.com/reactor/reactor-addons/blob/master/reactor-extra/src/main/java/reactor/retry/Retry.java
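A minimal sketch of that builder applied to this use case (assuming reactor-extra is on the classpath; send(...) and logger come from the question):

// First attempt runs on subscription; on error, retry up to 5 times, 10 minutes apart.
Mono.fromRunnable(() -> send(data))
        .retryWhen(Retry.any()
                .fixedBackoff(Duration.ofMinutes(10))
                .retryMax(5))
        .doOnError(ex -> logger.warn("Retries exhausted: {}", ex.getMessage()))
        .subscribe();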
