Can I make a synchronous/blocking call via RabbitMQ using Spring Integration?

Imagine I have two Java apps, A and B, using Spring Integration's RabbitMQ support for communication.
Can I make a synchronous/blocking call from A to B? If so, how (roughly)?
Ideally, A has a Spring Integration Gateway which it invokes via e.g. a method called
Object doSomething(Object param)
Then it blocks while the Gateway sends on the message via RabbitMQ to a ServiceActivator on B, and B returns the return value, which eventually becomes the result of the doSomething() method.
It seems this may be possible, but the docs and other Stack Overflow questions don't seem to address this directly.
Many thanks!

Yes, that is exactly what the Gateway pattern implements. Your client blocks waiting for the result of the gateway method, while the underlying integration flow can be async, parallel, etc.
Spring Integration AMQP provides the <int-amqp:outbound-gateway> for blocking request/reply scenarios with RabbitMQ.
Of course, the other, receiving side must take care of correlation and send the reply to the appropriate replyTo address from the request message. The simplest way is to use an <int-amqp:inbound-gateway> there.
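A minimal sketch of how the two sides could be wired with the XML namespace; all channel, queue, and bean names here are hypothetical:

```xml
<!-- A side: the caller blocks on the gateway proxy's doSomething() method -->
<int:gateway id="somethingGateway"
             service-interface="com.example.SomethingGateway"
             default-request-channel="toRabbit"/>

<int-amqp:outbound-gateway request-channel="toRabbit"
                           exchange-name="some.exchange"
                           routing-key="doSomething"
                           amqp-template="rabbitTemplate"/>

<!-- B side: the inbound gateway handles replyTo/correlation for you -->
<int-amqp:inbound-gateway request-channel="fromRabbit"
                          queue-names="doSomething.queue"
                          connection-factory="rabbitConnectionFactory"/>

<int:service-activator input-channel="fromRabbit"
                       ref="somethingService" method="handle"/>
```

The gateway proxy's return value is whatever the service activator on B returns, carried back over the reply queue that the framework sets up.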

Related

When returning a Mono from a ReactiveMongo repository does the database call start in the background?

I am new to Spring WebFlux and would like to know what happens when a Mono is returned from a ReactiveMongo repository.
Does the database call begin immediately once the Mono is returned?
For example:
public void serviceMethod() {
    var mono = reactiveMongoRepo.findItemById("123");
}
When the Mono is returned, does the database call start instantly, or do I have to subscribe to the Mono to start the database call?
What actually happens under the hood?
When you declare your reactive code, it is just that: a declaration. You are basically describing what you want your flow to look like when someone subscribes. This is called assembly time in Reactor terminology.
When someone actually subscribes (for instance an external client), we enter what is called subscription time. The subscriber triggers a signal that is propagated up the stream with the purpose of finding a publisher.
The subscriber basically asks in reverse up the chain (upstream): "are you a publisher?" The chain might answer "no", and the signal is propagated further up the chain (upstream), asking "are you a publisher?" until it finds one; for instance a database call, a Flux, a Mono, etc.
Once a publisher has been found, from there on we are in execution time. The data starts flowing through the pipeline (in the more natural top-to-bottom order, or upstream to downstream).
You can read more about it in the blog series Flight of the Flux, written by Reactor developer Simon Baslé, which is an excellent read if you want to know a little more about the inner workings of the Reactor library.
No. Nothing happens until you (or, more likely when using Spring WebFlux, the framework) subscribe to the chain.
https://projectreactor.io/docs/core/release/reference/#reactive.subscribe
In Reactor, when you write a Publisher chain, data does not start pumping into it by default. Instead, you create an abstract description of your asynchronous process (which can help with reusability and composition).
By the act of subscribing, you tie the Publisher to a Subscriber, which triggers the flow of data in the whole chain. This is achieved internally by a single request signal from the Subscriber that is propagated upstream, all the way back to the source Publisher.
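The laziness described above can be illustrated without Reactor at all. Below is a hypothetical, minimal "Mono"-like class (not the real Reactor API) showing that assembling the pipeline runs no work and only subscribing triggers the source:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Function;
import java.util.function.Supplier;

public class LazyMonoDemo {
    // Toy stand-in for a Mono: wraps a deferred computation.
    static class LazyMono<T> {
        private final Supplier<T> source;
        LazyMono(Supplier<T> source) { this.source = source; }

        // Operators only wrap the supplier; nothing runs yet (assembly time).
        <R> LazyMono<R> map(Function<T, R> f) {
            return new LazyMono<>(() -> f.apply(source.get()));
        }

        // Subscribing is what finally pulls data through (execution time).
        T subscribe() { return source.get(); }
    }

    public static void main(String[] args) {
        AtomicBoolean sourceCalled = new AtomicBoolean(false);

        // Assembly time: describe the flow; the "database call" does not run.
        LazyMono<String> mono = new LazyMono<>(() -> {
            sourceCalled.set(true);   // pretend this is the DB query
            return "123";
        }).map(id -> "item-" + id);

        System.out.println("after assembly, source called: " + sourceCalled.get());

        // Subscription time: the request propagates to the source and data flows.
        String result = mono.subscribe();
        System.out.println("result: " + result + ", source called: " + sourceCalled.get());
    }
}
```

Running it shows the source supplier is untouched after assembly and only fires on subscribe, which is the same reason a repository Mono issues no query until something subscribes.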

Using reactive WebFlux code inside a @KafkaListener-annotated method

I am using spring-kafka to implement a consumer that reads messages from a certain topic. All of these messages are processed by exporting them into another system via a REST API. For that, the code uses the WebClient from the Spring WebFlux project, which results in reactive code:
@KafkaListener(topics = "${some.topic}", groupId = "my-group-id")
public void listenToTopic(final ConsumerRecord<String, String> record) {
    // minimal, non-reactive code here (logging, deserializing the string)
    webClient.get().uri(...).retrieve().bodyToMono(String.class)
        // long, reactive chain here
        .subscribe();
}
Now I am wondering if this setup is reasonable or if this could cause a lot of issues because the KafkaListener logic from spring-kafka isn't inherently reactive. I wonder if it is necessary to use reactor-kafka instead.
My understanding of the whole reactive world and also the Kafka world is very limited, but here is what I am currently assuming the above setup would entail:
1. The listenToTopic function will return almost immediately, because the bulk of the work is done in a reactive chain, which will not block the function from returning. From what I understand, the KafkaListener logic will then assume that the message has been properly processed right there and then, so it will probably acknowledge it and at some point also commit it.
2. If I understand correctly, that means the processing of the messages could get out of order. Work could still be going on in a previous reactive chain while the KafkaListener already fetches the next record. So if the application relies on the messages being fully processed in strict order, the above setup would be bad; but if it does not, the setup would be okay?
3. Another issue is that the application could overload itself with work if a lot of messages come in. Because the listener function returns almost immediately, a large number of messages could be processing inside reactive chains at the same time.
4. The retry logic that comes built in with the @KafkaListener machinery would not really work here, because exceptions inside the reactive chain would not trigger it. Any retry logic would have to be handled by the reactive code inside the listener function itself.
When using reactor-kafka instead of the @KafkaListener annotation, one could change the behaviour described in point 1. Because the listener would now be integrated into the reactive chain, it would be possible to acknowledge a message only when the reactive chain has actually finished. That way, from what I understand, the next message would only be fetched after the previous one has been fully processed by the reactive chain. This would probably solve the issues described in points 2-4 as well.
The question: Is my understanding of the situation correct? Are there other issues that could be caused by this setup that I have missed?
Your understanding is correct; either switch to a non-reactive rest client (e.g. RestTemplate) or use reactor-kafka for the consumer.
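A rough sketch of the reactor-kafka route; the receiver options, endpoint path, and acknowledge placement below are assumptions for illustration, not taken from the question:

```java
// KafkaReceiver.receive() emits ReceiverRecords as part of the reactive chain.
KafkaReceiver.create(receiverOptions)
        .receive()
        // concatMap processes records one at a time, in order,
        // so the next record is only requested after this one completes.
        .concatMap(record ->
                webClient.get()
                        .uri("/export/{key}", record.key())   // hypothetical endpoint
                        .retrieve()
                        .bodyToMono(String.class)
                        // acknowledge only once the REST call has succeeded
                        .doOnSuccess(body -> record.receiverOffset().acknowledge()))
        .subscribe();
```

With flatMap instead of concatMap you would regain concurrency (with a bounded in-flight count) at the cost of ordering, which is the trade-off described in points 2 and 3 above.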

RabbitMQ Microservices - Parallel processing

I'm working on an application with a microservices architecture, using RabbitMQ as the messaging system.
Calls between microservices are asynchronous HTTP requests, and each service is subscribed to specific queues.
My question: given that the calls are stateless, how can I guarantee that message consumption is separated per HTTP call, rather than only by routing key on the RabbitMQ queue? That is to say, for n calls, every service must be able to listen only to the messages it needs.
Sorry for the ambiguity; let me try to explain further.
The scenario is a microservice architecture where, because the responses are huge, the calling service receives the answer on a RabbitMQ listener queue.
So imagine two calls are made simultaneously and both queries start loading data into the same queue. The calling service is waiting for messages and collects the ones it receives, but cannot differentiate between the data of caller 1 and caller 2.
Is there a better implementation for the listener?
Not sure I understood the question completely, but here is what I can suggest based on the description:
If each service is hooked to a particular listener and you don't want to associate a routing key with the queue+listener integration, then you can try using header arguments. (You can use the QueueBuilder.withArguments API to set the specific header values the queue is supposed to listen for.)
There needs to be a mechanism through which an exchange binds to a particular queue and, consequently, to a listener service.
Publisher -> Exchange ---> (with headers) binds to Queue -> Listener
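One way to realise that binding in Spring AMQP is a headers exchange, which routes on message headers instead of the routing key. A sketch with hypothetical names:

```java
// All bean/queue/header names here are made up for illustration.
@Bean
public Queue callerOneQueue() {
    return QueueBuilder.durable("caller-1.replies").build();
}

@Bean
public HeadersExchange repliesExchange() {
    return new HeadersExchange("replies.headers");
}

@Bean
public Binding callerOneBinding() {
    // Only messages whose "caller" header equals "caller-1" reach this queue.
    return BindingBuilder.bind(callerOneQueue())
            .to(repliesExchange())
            .where("caller").matches("caller-1");
}
```

The publisher would then set the `caller` header on each reply so that each caller's listener only ever sees its own data.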

Handling JMS-acknowledgements in Camel

We are implementing a distributed system based (among other things) upon JMS and REST calls. Currently we are looking at two components, A and B. Component A reads from an ActiveMQ queue (via Camel's from), processes the message, and sends it on to B via REST (this is done via Camel's .to/.inOnly). B processes the message further.
In A this looks roughly like this:
from(activeMqInQueue)
    .process(/* someBean */)
    .inOnly(/* URI of B */)
    .end();
Some time later, B will make an async call (decoupled by a SEDA queue) back to A. From the perspective of Camel, the two calls have nothing to do with each other, but from our point of view it is important to acknowledge the original message once we get an answer from B. Obviously, we have some form of handler which can relate the outgoing and incoming calls, but what we are lacking is the possibility to explicitly acknowledge the original message.
How is this done or what pattern better suits our needs?

Send and receive on RabbitMQ fanout exchange via Spring

I'm working with RabbitMQ and I'm confused about using a fanout exchange with the convertSendAndReceive (or sendAndReceive) method of the RabbitTemplate class.
For example, I have two consumers for durable queues QUEUE-01 and QUEUE-02 that are bound to a durable fanout exchange FANOUT-01, and one publisher to FANOUT-01. I understand what happens when a message is published with the convertAndSend (or send) method: the message is copied to each queue and processed by each consumer. But I'm not sure what will happen if I call the sendAndReceive method. Which consumer will I get a reply from? Is there any specific behaviour? I could not find any documentation on this.
The sendAndReceive method in RabbitTemplate is used when you would like RPC-style messaging. There is an excellent tutorial here.
sendAndReceive() is not appropriate for fanout messaging; it is indeterminate as to which reply will win (the first one, generally). If you want to handle multiple replies and aggregate them you will need to use discrete send and receive calls (or listener container for the replies) and do the aggregation yourself.
Consider using Spring Integration for such situations. It has built-in components for aggregating messages.
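A sketch of the discrete send-and-receive approach mentioned above, with hypothetical queue names and a fixed expected-consumer count:

```java
// Publish once to the fanout exchange with an explicit replyTo and
// correlation id, then collect one reply per known consumer yourself.
String correlationId = UUID.randomUUID().toString();
rabbitTemplate.convertAndSend("FANOUT-01", "", payload, m -> {
    m.getMessageProperties().setReplyTo("replies.queue");     // hypothetical queue
    m.getMessageProperties().setCorrelationId(correlationId);
    return m;
});

int expectedConsumers = 2;  // e.g. the consumers on QUEUE-01 and QUEUE-02
List<Message> replies = new ArrayList<>();
for (int i = 0; i < expectedConsumers; i++) {
    Message reply = rabbitTemplate.receive("replies.queue", 5000); // 5s timeout
    if (reply == null) {
        break;  // a consumer did not reply in time
    }
    // In a real system, also check the correlation id before aggregating.
    replies.add(reply);
}
// ... aggregate the replies as needed
```

Spring Integration's aggregator component does essentially this correlation and release logic for you, which is why it is suggested for such situations.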