I'm working with RabbitMQ and I'm confused about using a fanout exchange with the convertSendAndReceive (or sendAndReceive) method of the RabbitTemplate class.
For example, I have 2 consumers for durable queues QUEUE-01 and QUEUE-02 that are bound to a durable fanout exchange FANOUT-01, and 1 publisher to FANOUT-01. I understand what happens when a message is published with the convertAndSend (or send) method: the message is copied to each queue and processed by each consumer. But I'm not sure what will happen if I call the sendAndReceive method. Which consumer will I get a reply from? Is there any specific behaviour? I could not find any documentation on this.
The sendAndReceive method in the RabbitTemplate is used when you would like to do RPC-style messaging. There is an excellent tutorial on this in the official RabbitMQ documentation.
sendAndReceive() is not appropriate for fanout messaging; it is indeterminate which reply will win (generally, the first one to arrive). If you want to handle multiple replies and aggregate them, you will need to use discrete send and receive calls (or a listener container for the replies) and do the aggregation yourself.
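For illustration, here is a minimal sketch of my own (not part of the original answer, and assuming Spring AMQP 2.x) of doing that aggregation by hand: publish once to the fanout exchange with a correlationId and a shared, pre-declared reply queue, then drain that queue for as many replies as there are bound consumers. The queue name replies.q, the 5-second timeout, and the reply count of 2 are assumptions.

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public List<String> broadcastAndAggregate(RabbitTemplate rabbitTemplate, Object payload) {
    String correlationId = UUID.randomUUID().toString();
    // publish once; the fanout exchange copies the message to each bound queue
    rabbitTemplate.convertAndSend("FANOUT-01", "", payload, m -> {
        m.getMessageProperties().setCorrelationId(correlationId);
        m.getMessageProperties().setReplyTo("replies.q"); // shared reply queue (assumed)
        return m;
    });
    List<String> replies = new ArrayList<>();
    for (int i = 0; i < 2; i++) { // one reply expected per bound queue/consumer
        Message reply = rabbitTemplate.receive("replies.q", 5000); // block up to 5s
        if (reply == null) {
            break; // a consumer did not answer in time
        }
        if (correlationId.equals(reply.getMessageProperties().getCorrelationId())) {
            replies.add(new String(reply.getBody()));
        }
    }
    return replies;
}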
Consider using Spring Integration for such situations. It has built-in components for aggregating messages.
I am creating two Apache Camel (Blueprint XML) Kafka projects: one is a kafka-producer which accepts requests and stores them in the Kafka server, and the other is a kafka-consumer which picks up messages from the Kafka server and processes them.
This setup is working fine for a single topic and a single consumer. However, how do I create separate consumer groups within the same Kafka topic? And how do I route consumer-specific messages within the same topic to different consumer groups? Any help is appreciated. Thank you.
Your question is quite general and it's not very clear what problem you are trying to solve, so it's hard to tell whether there's a better way to implement the solution.
Anyway, let's start by saying that, as far as I can understand, you are looking for a Selective Consumer (EIP), which is something that's not supported out-of-the-box by Kafka and its Consumer API. A Selective Consumer can choose which messages to pick from a queue or topic based on specific selector values set in advance by the producer. This feature must be implemented in the message broker as well, and Kafka has no such capability.
Kafka implements a hybrid between pure pub/sub and a queue. That being said, what you can do is subscribe to the topic with one or more consumer groups (more on that later) and filter out all messages you're not interested in by inspecting the messages themselves. In the messaging and EIP world, this pattern is known as an Array of Filters. As you can imagine, this happens after the message has been broadcast to all subscribers; if that does not fit your requirements or context, you can instead implement a Content-Based Router, which dispatches each message to a subset of consumers under your centralized control (this implies intermediate consumer-specific channels, which could be other Kafka topics or seda/VM queues, of course).
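As a rough illustration of those two options in Camel's Java DSL (my own sketch; the messageType header is a hypothetical selector the producer would have to set):

// Option 1, Array of Filters: every group receives everything and each
// consumer drops what it is not interested in
from("kafka:myTopic?brokers={{kafkaBootstrapServers}}&groupId=ordersGroup")
    .filter(header("messageType").isEqualTo("ORDER"))
    .to("bean:orderHandler");

// Option 2, Content-Based Router: one routing consumer dispatches to
// consumer-specific channels (seda queues here, but other topics work too)
from("kafka:myTopic?brokers={{kafkaBootstrapServers}}&groupId=routerGroup")
    .choice()
        .when(header("messageType").isEqualTo("ORDER")).to("seda:orders")
        .when(header("messageType").isEqualTo("INVOICE")).to("seda:invoices")
        .otherwise().to("seda:unclassified");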
Moving to the second question, here is the official Kafka Component website: https://camel.apache.org/components/latest/kafka-component.html.
In order to create different consumer groups, you just have to define multiple routes, each with a dedicated groupId. By adding the groupId property, you inform the consumer group coordinators (which reside in the Kafka brokers) of the existence of multiple separate groups of consumers, and the brokers will treat them separately, sending each group a copy of every log message stored in the topic.
Here is an example:
@Override
public void configure() throws Exception {
    // two routes consuming the same topic under different consumer
    // groups; Kafka delivers a copy of each record to every group
    from("kafka:myTopic?brokers={{kafkaBootstrapServers}}"
            + "&groupId=myFirstConsumerGroup")
        .log("Message received by myFirstConsumerGroup : ${body}");

    from("kafka:myTopic?brokers={{kafkaBootstrapServers}}"
            + "&groupId=mySecondConsumerGroup")
        .log("Message received by mySecondConsumerGroup : ${body}");
}
As you can see, I created two routes in the same RouteBuilder, and therefore in the same Java process. That's a bad design decision in most of the use cases I can think of, because there is no single responsibility, concerns are not segregated, and the routes cannot scale independently. But again, it depends on your requirements and context.
For completeness, please consider taking a look at all the other Kafka component properties, as there may be other configurations of interest to you, such as the number of consumer threads per group.
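For instance (hypothetical values), the consumersCount option spawns several consumer threads within the same group, which helps when the topic has multiple partitions:

// three threads in myFirstConsumerGroup share the topic's partitions
from("kafka:myTopic?brokers={{kafkaBootstrapServers}}"
        + "&groupId=myFirstConsumerGroup&consumersCount=3")
    .log("Message received : ${body}");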
I tried to stay high-level in order to start the discussion; I'll edit my answer in case of new updates from you. Hope this helps!
I'm working on an integration via JMS using JmsTemplate from the Spring Framework. I want to execute a synchronous (i.e. blocking) call to the external system. I've read that in order to do this I should use a CorrelationID. The JMS specification says:
A client can use the JMSCorrelationID header field to link one message with another. A typical use is to link a response message with its request message.
So it clearly suggests using CorrelationID for request/reply pattern.
I've also discovered that JmsTemplate has a sendAndReceive method that was designed to achieve a similar goal. sendAndReceive internally uses doSendAndReceive, which according to the javadoc:
Send a request message to the given Destination and block until a
reply has been received on a temporary queue created on-the-fly.
Now I'm really confused. Does the CorrelationID header have something in common with the ReplyTo header? Are these two different ways to achieve a synchronous call? Or maybe both should be used together? A simple clarification in plain English would be more than welcome.
They are not really related. If you use a temporary reply queue for each request, you don't need a correlationId. If you use distinct request/reply queues, then you need something to correlate a reply to its request; hence the correlationId.
Spring Integration's outbound gateway supports both methods and handles the correlation for you (the calling thread blocks until the reply is received, regardless of which technique is used).
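To make the two techniques concrete, here is a minimal sketch of my own (the queue names requests and replies are assumptions) using JmsTemplate:

import java.util.UUID;
import javax.jms.Message;
import javax.jms.TextMessage;
import org.springframework.jms.core.JmsTemplate;

// 1) Temporary reply queue per request: no correlationId needed, because
//    only the matching reply can ever arrive on that private queue
Message reply = jmsTemplate.sendAndReceive("requests",
        session -> session.createTextMessage("ping"));

// 2) Shared fixed reply queue: correlate replies by JMSCorrelationID
String correlationId = UUID.randomUUID().toString();
jmsTemplate.send("requests", session -> {
    TextMessage request = session.createTextMessage("ping");
    request.setJMSCorrelationID(correlationId);
    request.setJMSReplyTo(session.createQueue("replies"));
    return request;
});
Message correlated = jmsTemplate.receiveSelected("replies",
        "JMSCorrelationID = '" + correlationId + "'");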
The system consists of one or more clients and a single server. Each client is added manually and is given an identifier (client.id). Each client can send messages to the server, and the server can send messages to each client. All messages fall into two groups: those that expect an answer and those that don't.
For example, some signatures:
CompletableFuture<Response> call(Request requestToServer)
void execute(Data dataToSend)
where Response, Request, and Data are my POJOs.
So, I need some sort of RMI for implementing message communication between the server and clients.
Requirements:
The server must be able to identify a client by its id (client.id) when processing a message, but the client should not have to fill in this identifier itself before sending the message;
Messages should be POJOs;
Ability to reply to a message with an exception;
Event-driven handlers (like @RabbitListener): several handlers, one Spring bean per incoming message type, with or without a return type. A handler should be resolved automatically based on the incoming message type;
Backed by RabbitMQ or ArtemisMQ;
Single service for sending messages from the server to clients: client id should be provided when sending a message. Example: void sendToClient(int clientId, Data dataToClient).
What I've tried in order to set up this kind of communication:
Spring Integration
My own gateway with a CompletableFuture works great, and I can enrich message headers with client.id. But I didn't find an appropriate way to handle an incoming message and be able to answer it. I tried publishing an ApplicationEvent, but all my event handlers have a void return type. So my idea here is to get the correlationId and send back a message providing that correlationId, but that doesn't look like a clean solution.
RabbitListener/RabbitTemplate
Cons:
A lot of code to set up the RabbitTemplate to send and receive messages;
Need to manually set up request and reply queues and bindings;
Problem with resolving client.id inside @RabbitHandler.
AmqpProxyFactoryBean
The closest result to my needs, but there are several problems that I cannot solve:
Resolving client.id in the message handler;
Single handler per service interface method.
So, I need a lightweight solution to build up communication between services, backed by a message queue. It should be easy to add an additional message type: declare the class, add a handler on the consumer side, then create an object of that class on the producer side and send it.
But maybe I'm completely wrong about service communication? Maybe I should not use a message queue for that purpose?
Using a message queue or a message broker like RabbitMQ is a perfectly valid solution, but you have to think about whether it is what you actually need.
A message broker allows you to decouple the producers and consumers of messages. It also allows you to introduce a level of fault tolerance: if a consumer is temporarily unavailable, the message is not lost. However, it also means that if a reply is expected, the producer might incorrectly assume that the other end has processed its request and keep waiting for a reply. Message brokers also often offer delivery guarantees (such as once-and-only-once or at-least-once), policies for handling undeliverable messages (like dead-letter queues), time-to-live, and various routing capabilities (like consistent-hash-based routing). In your case, you will probably have to route by some header value carrying your client.id if your server-originated messages are to reach one client only.
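As a sketch of that last point (my own illustration with Spring AMQP; the exchange and queue names are assumptions), a headers exchange can bind one queue per client on an exact clientId match:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.HeadersExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ClientRouting {

    // declared once per client: the queue only matches its own clientId
    public Binding bindClientQueue(int clientId) {
        HeadersExchange exchange = new HeadersExchange("clients.ex");
        Queue clientQueue = new Queue("client." + clientId + ".q");
        return BindingBuilder.bind(clientQueue).to(exchange)
                .where("clientId").matches(clientId);
    }

    // the single server-side send method from the question's requirements
    public void sendToClient(RabbitTemplate template, int clientId, Object data) {
        template.convertAndSend("clients.ex", "", data, m -> {
            m.getMessageProperties().setHeader("clientId", clientId);
            return m;
        });
    }
}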
If you don't care about any of these features and just want a communications fabric, maybe going for something a bit different might make more sense. For example Akka offers actor-based distributed communication.
If what is bothering you is just the cleanliness of the solution, or the amount of boilerplate, maybe having a look at other implementations might help. You might want to have a look at the Reactive Streams API implementation for RabbitMQ, which should be a bit more abstract.
I want to make my app resilient to connection issues that can happen when sending messages to RabbitMQ. I want to get hold of all unsent messages, store them, and send them later, when RabbitMQ becomes available.
Looking at the official documentation, I didn't manage to figure out the difference between the return callback and the retry callback. I understand that the retry callback is invoked when the retry template exhausts the configured policy, but I don't find this very useful since the context doesn't contain the message itself.
Based on the replyCode passed to the returnedMessage method of the ReturnCallback interface, one could easily determine further behavior, but I haven't figured out when this callback is invoked.
One way to go is the ConfirmCallback, but then there is the issue of additional logic to keep CorrelationData, messages, and the statuses of those messages in sync.
So, apart from the ConfirmCallback interface, is there any easier way to keep track of messages that are not successfully sent through RabbitMQ using Spring AMQP?
Returns are when the broker returns a message because it's undeliverable (no matching bindings on the exchange to which the message was published, and the mandatory bit is set).
Confirms are when the broker sends an ack back to the publisher, indicating that a message was successfully routed.
Retry is outside the broker realm so it is likely what you need for your use case.
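As an illustration only (the exchange ex, routing key rk, and the in-memory collections standing in for a real store are all assumptions), here is a sketch of wiring both callbacks, assuming publisher confirms and returns are enabled on the connection factory:

import java.util.List;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.connection.CorrelationData;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class TrackingPublisher {

    // in-memory stand-ins for whatever persistent store you choose
    private final List<Message> returnedMessages = new CopyOnWriteArrayList<>();
    private final ConcurrentHashMap<String, String> nackedIds = new ConcurrentHashMap<>();

    public TrackingPublisher(RabbitTemplate template) {
        template.setMandatory(true); // required for returns to fire
        // confirm: the broker acked or nacked taking responsibility
        template.setConfirmCallback((correlation, ack, cause) -> {
            if (!ack && correlation != null) {
                nackedIds.put(correlation.getId(), String.valueOf(cause));
            }
        });
        // return: the message was unroutable and came back to the publisher
        template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) ->
                returnedMessages.add(message));
    }

    public void send(RabbitTemplate template, Object payload) {
        // attach an id so a confirm can be matched back to this message
        template.convertAndSend("ex", "rk", payload,
                new CorrelationData(UUID.randomUUID().toString()));
    }
}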
I am trying to implement a custom inbound channel adapter in Spring Integration to consume messages from Apache Kafka. Based on the Spring Integration examples, I found that I need to create a class that implements the MessageSource interface and implement its receive() method to return a consumed Message from Kafka. But based on the consumer example in Kafka, the message iterator in KafkaStream is backed by a BlockingQueue, so if no messages are in the queue, the thread will block.
So what is the best way to implement a receive() method that can potentially block until there is something to consume?
More generally, how do we implement a custom inbound channel adapter for a streaming message source that blocks until there is something ready to consume?
The receive() method can block (as long as the underlying operation responds properly to an interrupted thread), and from an inbound-channel-adapter perspective, depending on the expectations of the underlying source, it might be preferable to use a fixed-delay trigger. For example, "long polling" can simulate event-driven behavior when a very small delay value is provided.
We have a similar situation in our JMS polling MessageSource implementation. There, the underlying behavior is handled by one of the JmsTemplate's receive() methods. The JmsTemplate itself allows configuration of a timeout value. That means, as an example, you may choose to block for 5 seconds max but then have a very short delay trigger between each blocking receive call. Alternatively, you can specify an indefinite receive timeout. The decision ultimately depends on the expectations of the underlying resource, message throughput, etc.
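To sketch the idea (my own illustration, not the framework's JMS source), here is a polled MessageSource whose receive() blocks on the underlying queue with a bounded timeout; blockingQueue stands in for the Kafka stream's iterator:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class BlockingQueueMessageSource implements MessageSource<String> {

    // assumed to be fed by the Kafka consumer thread
    private final BlockingQueue<String> blockingQueue;

    public BlockingQueueMessageSource(BlockingQueue<String> blockingQueue) {
        this.blockingQueue = blockingQueue;
    }

    @Override
    public Message<String> receive() {
        try {
            // block for up to 5 seconds, then return null so the poller's
            // (short) fixed delay can trigger the next attempt
            String payload = blockingQueue.poll(5, TimeUnit.SECONDS);
            return (payload == null) ? null : MessageBuilder.withPayload(payload).build();
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // respond properly to interruption
            return null;
        }
    }
}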
Also, I wanted to let you know that we are exploring Kafka adapters ourselves. Perhaps you would like to collaborate on this within the spring-integration-extensions repository?
Regards,
Mark