The system consists of one or more clients and a single server. Each client is added manually and given an identifier (client.id). Each client can send messages to the server, and the server can send messages to each client. All messages fall into two groups: those that expect an answer and those that do not.
For example, some signatures:
CompletableFuture<Response> call(Request requestToServer)
void execute(Data dataToSend)
where Response, Request, and Data are my POJOs.
So I need some sort of RMI-like mechanism for message communication between the server and clients.
Requirements:
The server must be able to identify a client by its client.id when processing a message, but the client should not have to fill in this identifier itself before sending the message;
Messages should be POJO;
Ability to answer a message with an exception;
Event-driven handlers (like @RabbitListener): several handlers, one Spring bean handler per incoming message type, with or without a return type. A handler should be resolved automatically based on the incoming message type (see the sketch after this list);
Backed by RabbitMQ or ArtemisMQ;
Single service for sending messages from the server to clients: client id should be provided when sending a message. Example: void sendToClient(int clientId, Data dataToClient).
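To illustrate the handler requirement above, here is roughly the consumer shape I have in mind, written with plain Spring AMQP (a sketch only: the queue name and the "client_id" header key are made up, and resolving that header is exactly part of the problem):

import org.springframework.amqp.rabbit.annotation.RabbitHandler;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

// Sketch: one Spring bean, one @RabbitHandler per incoming message type,
// resolved automatically by payload type. Request/Response/Data are the
// POJOs above; the queue name and header key are placeholders.
@Component
@RabbitListener(queues = "server.requests")
public class ServerMessageHandlers {

    @RabbitHandler
    public Response handle(Request request, @Header("client_id") int clientId) {
        // a message type with an answer: the return value becomes the reply
        return new Response();
    }

    @RabbitHandler
    public void handle(Data data, @Header("client_id") int clientId) {
        // a message type without an answer
    }
}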
What I've tried to set up this method of communication:
Spring Integration
My own gateway with CompletableFuture works great, and I can also enrich message headers with client.id. But I didn't find an appropriate way to handle an incoming message and be able to answer it. I tried publishing an ApplicationEvent, but all event handlers have a void return type. So my idea here is to get the correlationId and send back a message carrying that correlationId, which doesn't look like a clean solution.
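For reference, the gateway side looked roughly like this (a sketch: the channel name, the "client_id" header key, and the clientProperties bean are placeholders of mine):

import java.util.concurrent.CompletableFuture;

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.GatewayHeader;
import org.springframework.integration.annotation.MessagingGateway;

// Sketch of the gateway that worked well: the framework completes the future
// from the reply, and a client_id header is added to every outgoing message.
// The channel name, header key and clientProperties bean are placeholders.
@MessagingGateway(defaultRequestChannel = "requests.out")
public interface ServerGateway {

    @Gateway(headers = @GatewayHeader(name = "client_id", expression = "@clientProperties.id"))
    CompletableFuture<Response> call(Request requestToServer);

    @Gateway(headers = @GatewayHeader(name = "client_id", expression = "@clientProperties.id"))
    void execute(Data dataToSend);
}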
RabbitListener/RabbitTemplate
Cons:
A lot of code to set up RabbitTemplate to send and receive messages;
Need to manually set up request and reply queues and bindings;
Problems with resolving client.id inside a @RabbitHandler.
AmqpProxyFactoryBean
The closest result to my needs, but it has several problems that I cannot solve:
Resolving client.id in the message handler;
A single handler per service interface method.
So, I need a lightweight solution to build communication between services, backed by a message queue. It should be easy to add an additional message type: declare the class, add a handler on the consumer side, then create an object of that class on the producer side and send it.
But maybe I'm completely wrong about service communication? Maybe I should not use a message queue for that purpose at all?
Well, using a message queue or message broker like RabbitMQ is a perfectly valid solution, but you have to think about whether it is what you actually need.
A message broker allows you to decouple the producers and consumers of messages. It also lets you introduce a level of fault tolerance: if the consumer is temporarily unavailable, the message is not lost. However, it also means that if a reply is expected, the producer of the message might incorrectly assume that the other end has processed its request and keep waiting for a reply. Message brokers also often offer delivery guarantees, like once-and-only-once or at-least-once, as well as policies for handling undeliverable messages (like dead letter queues), time-to-live, and various routing capabilities (like consistent-hash-based routing). In your case, you will probably have to route by some header value carrying your client.id if your server-originated messages are to reach one client only.
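For example, with Spring AMQP that routing could look roughly like this, giving every client its own queue bound to a headers exchange on its id (a sketch; the exchange, queue, and header names are made up):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.HeadersExchange;
import org.springframework.amqp.core.Queue;

// Sketch: one durable queue per client, bound to a headers exchange so that
// only messages whose "client_id" header matches reach that client.
// Exchange, queue and header names are made up for illustration.
public class ClientBindings {

    public static Binding bindingForClient(int clientId) {
        HeadersExchange toClients = new HeadersExchange("server.to.clients");
        Queue clientQueue = new Queue("client." + clientId, true);
        return BindingBuilder.bind(clientQueue).to(toClients).where("client_id").matches(clientId);
    }
}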
If you don't care about any of these features and just want a communications fabric, something a bit different might make more sense. For example, Akka offers actor-based distributed communication.
If what is bothering you is just the cleanliness of the solution, or the amount of boilerplate, maybe having a look at other implementations might help. You might want to have a look at the Reactive Streams API implementation for RabbitMQ, which should be a bit more abstract.
Related
I'm working on an integration via JMS using JmsTemplate from the Spring Framework. I want to execute a synchronous (i.e. blocking) call to the external system. I've read that in order to do this I should use a CorrelationID. The JMS specification says:
A client can use the JMSCorrelationID header field to link one message
with another. A typical use is to link a response message with its
request message.
So it clearly suggests using CorrelationID for request/reply pattern.
I've also discovered that JmsTemplate has a sendAndReceive method that was designed to achieve a similar goal. sendAndReceive internally uses doSendAndReceive, which according to the javadoc:
Send a request message to the given Destination and block until a
reply has been received on a temporary queue created on-the-fly.
Now I'm really confused. Does the CorrelationID header have something in common with the ReplyTo header? Are these two different ways to achieve a synchronous call, or should they be used together? A simple clarification in plain English would be more than welcome.
They are not really related. If you use a temporary reply queue for each request, you don't need a correlationId. If you use distinct request/reply queues then you need something to correlate a reply to its request; hence correlationId.
Spring Integration's outbound gateway supports both methods and handles the correlation for you (the calling thread blocks until the reply is received, regardless of which technique is used).
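As a rough illustration of the two techniques with a plain JmsTemplate (the queue names and payloads here are made up):

import javax.jms.Message;
import javax.jms.TextMessage;

import org.springframework.jms.core.JmsTemplate;

// Illustrative sketch only; queue names and payloads are made up.
public class RequestReplyExamples {

    // 1) Temporary reply queue: sendAndReceive creates it on the fly and
    //    blocks until the reply arrives, so no correlationId is needed.
    public Message viaTemporaryQueue(JmsTemplate jmsTemplate) {
        return jmsTemplate.sendAndReceive("requests",
                session -> session.createTextMessage("ping"));
    }

    // 2) Fixed reply queue: set a correlationId on the request and select
    //    the matching reply from the shared reply queue.
    public Message viaFixedReplyQueue(JmsTemplate jmsTemplate, String correlationId) {
        jmsTemplate.send("requests", session -> {
            TextMessage msg = session.createTextMessage("ping");
            msg.setJMSCorrelationID(correlationId);
            msg.setJMSReplyTo(session.createQueue("replies"));
            return msg;
        });
        return jmsTemplate.receiveSelected("replies",
                "JMSCorrelationID = '" + correlationId + "'");
    }
}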
I am trying to understand the best use of RabbitMQ to satisfy the following problem.
As context, I'm not concerned with performance in this use case (my peak for this flow is 2 TPS), but I am concerned about resilience.
I have RabbitMQ installed in a cluster. Ignoring dead letter queues, the basic flow is: a service receives a request and, in a transaction, publishes a persistent message to a durable queue (at this point I'm happy the request is secured to disk). Another process listens for that message, reads it (not using auto ack), does a bunch of work, and writes a new message to a different exchange/queue in a transaction (again, I'm happy this message is secured to disk). Assuming the transaction completes successfully, it manually acks the original message.
At this point my only failure scenario is if I have a failure between the commit of the transaction that writes to my second queue and the return of the ack. This could lead to a message being processed twice. Is there anything else I can do to plug this gap, or do I have to figure out a way of handling duplicate messages?
As a final bit of context, the services are written in Java, so I'm using the Java client libraries.
Paul Fitz.
First of all, I suggest you look at this guide here, which has a lot of valid information on your topic.
From the RabbitMQ guide:
At the Producer
When using confirms, producers recovering from a channel or connection
failure should retransmit any messages for which an acknowledgement
has not been received from the broker. There is a possibility of
message duplication here, because the broker might have sent a
confirmation that never reached the producer (due to network failures,
etc). Therefore consumer applications will need to perform
deduplication or handle incoming messages in an idempotent manner.
At the Consumer
In the event of network failure (or a node crashing), messages can be
duplicated, and consumers must be prepared to handle them. If
possible, the simplest way to handle this is to ensure that your
consumers handle messages in an idempotent way rather than explicitly
deal with deduplication.
So, the point is that it is not possible to guarantee that this "failure" scenario of yours will never happen. You will always have to deal with network failures, disk failures, and so on.
What you have to do here is lean on the messaging architecture and, if possible, make your message handling "idempotent" (which means that even if you process a message twice, nothing wrong happens; check this).
If you can't, then you should keep some kind of "processed messages" list (for example, you can put a GUID inside every message) and check that list every time you receive a message; duplicates can simply be discarded.
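A very simplified sketch of that check (the in-memory set and the message id property are placeholders; a real implementation needs a persistent, shared store):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.Message;

// Simplified dedup sketch: remember the ids of processed messages and drop
// duplicates. The in-memory set and the messageId property are placeholders;
// a real implementation needs a persistent, shared store.
public class DeduplicatingConsumer {

    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    public void onMessage(Message message, Channel channel) throws Exception {
        String id = message.getMessageProperties().getMessageId();
        long deliveryTag = message.getMessageProperties().getDeliveryTag();
        if (id != null && !processedIds.add(id)) {
            channel.basicAck(deliveryTag, false); // duplicate: ack and discard
            return;
        }
        // ... do the real work and publish the follow-up message here ...
        channel.basicAck(deliveryTag, false);
    }
}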
To be more "theorical", this post from Brave New Geek is very interesting:
Within the context of a distributed system, you cannot have
exactly-once message delivery.
Hope it helps :)
I'm working with RabbitMQ and I'm confused about using a fanout exchange with the convertSendAndReceive (or sendAndReceive) method of the RabbitTemplate class.
For example, I have two consumers for durable queues QUEUE-01 and QUEUE-02 that are bound to the durable fanout exchange FANOUT-01, and one publisher to FANOUT-01. I understand what happens when a message is published with the convertAndSend (or send) method: the message is copied to each queue and processed by each consumer. But I'm not sure what will happen if I call the sendAndReceive method. Which consumer will I get a reply from? Is there any specific behaviour? I could not find any documentation on this.
The sendAndReceive method in RabbitTemplate is used when you want RPC-style messaging. There is an excellent tutorial here.
sendAndReceive() is not appropriate for fanout messaging; it is indeterminate which reply will win (generally the first one). If you want to handle multiple replies and aggregate them, you will need to use discrete send and receive calls (or a listener container for the replies) and do the aggregation yourself.
Consider using Spring Integration for such situations. It has built-in components for aggregating messages.
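If you do the aggregation yourself, it could look roughly like this (a sketch; the reply queue name, the timeout, and the expected reply count are made up, and the reply queue must be declared elsewhere):

import java.util.ArrayList;
import java.util.List;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

// Sketch of fanning out a request and collecting the replies yourself.
// The reply queue name, timeout and expected reply count are made up.
public class FanoutAggregator {

    public List<Message> broadcastAndCollect(RabbitTemplate template, byte[] body, int expectedReplies) {
        MessageProperties props = new MessageProperties();
        props.setReplyTo("fanout.replies"); // fixed reply queue, declared elsewhere
        template.send("FANOUT-01", "", new Message(body, props));

        List<Message> replies = new ArrayList<>();
        for (int i = 0; i < expectedReplies; i++) {
            Message reply = template.receive("fanout.replies", 5000); // wait up to 5s per reply
            if (reply == null) {
                break; // timed out; stop waiting for further replies
            }
            replies.add(reply);
        }
        return replies;
    }
}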
I want to make my app resilient to connection issues that can happen when sending messages to RabbitMQ. I want to get hold of all unsent messages, store them, and send them later, when RabbitMQ becomes available.
Looking at the official documentation, I didn't manage to figure out the difference between the return callback and the retry callback. I understand that the retry callback is invoked when the retry template exhausts its configured policy, but I don't find that very useful, since the context doesn't contain the message itself.
Based on the "replyCode" passed to the "returnedMessage" method of the ReturnCallback interface, one could easily determine further behavior, but I haven't figured out when this callback is invoked.
One way to go is ConfirmCallback, but that requires additional logic for keeping CorrelationData, messages, and the statuses of those messages in sync.
So... apart from the ConfirmCallback interface, is there any easier way to keep track of messages that are not successfully sent through RabbitMQ using Spring AMQP?
Returns are when the broker returns a message because it's undeliverable (no matching bindings on the exchange to which the message was published, and the mandatory bit is set).
Confirms are when the broker sends an ack back to the publisher, indicating that a message was successfully routed.
Retry is outside the broker realm so it is likely what you need for your use case.
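For example, wiring both callbacks looks roughly like this (a sketch; the exact setter names vary a little between Spring AMQP versions):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

// Sketch of wiring both callbacks; setter names differ slightly across
// Spring AMQP versions, so treat this as illustrative.
public class PublisherCallbacks {

    public RabbitTemplate rabbitTemplate(CachingConnectionFactory connectionFactory) {
        connectionFactory.setPublisherConfirms(true); // broker acks -> ConfirmCallback
        connectionFactory.setPublisherReturns(true);  // unroutable messages -> ReturnCallback

        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setMandatory(true); // required for returns

        template.setConfirmCallback((correlationData, ack, cause) -> {
            if (!ack) {
                // the broker could not confirm the publish; store or resend the message
            }
        });

        template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            // the message was unroutable (no matching binding); handle it here
        });

        return template;
    }
}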
I have a requirement to process JMS messages (via an MDB) in such a way that messages belonging to a certain group (a group ID is set) are consumed by the same bean instance. The behaviour I require is that messages with the same group ID are processed sequentially (though message ordering is irrelevant), and tying them to the same MDB instance should provide that.
The messages do not carry any kind of sequence number (as it is irrelevant), and we do not know which message is the first or last in a group (theoretically there could "never" be a last message in a group). We want them to be delivered as soon as the consumer is able to receive them.
ActiveMQ provides this exact feature (http://activemq.apache.org/message-groups.html) simply by setting JMSXGroupID. We are bound to WebSphere MQ, though. All I've found out so far is that it is possible to collect messages of the same group in the queue and use a MessageSelector to receive a "Last Message in Group" message, as described in http://www.ibm.com/developerworks/websphere/library/techarticles/0602_currie/0602_currie.html. We would prefer a cleaner way, though (like in ActiveMQ). Does anyone know how to achieve that behaviour in WebSphere?
Thanks!
Generally you use MessageSelectors in IBM product implementations of JMS (both WebSphere MQ and SIBus implementations). These are the equivalent of a filter that would scan the header of an HTTP or SOAP message for web-based protocols.
Though it may not be what you want, it actually is a clean and well thought through design.
However, if you do not want to use MessageSelectors, you will probably have to build your own and "process" a message with a fronting MDB that scans the headers, and then forwards the message to an appropriate queue, where only the MDB that cares about the grouped messages will process them (sort of a gateway/message selector pattern).
If you are using the "pure" JMS API's, then you can ask the Session object to create a MessageConsumer with the specified selector string (value in the header) you want to filter on (again you have to set this yourself).
//assume we have created a JMS Connection and Session object already,
//and looked up the Queue we want already.
//The second argument is a message selector expression filtering on the
//header value the producer set (here the group id).
String selector = "JMSXGroupID = '" + groupId + "'";
MessageConsumer consumerWithSelector = session.createConsumer(queue, selector);
This is all the pure JMS APIs give you. Anything else is up to the implementer of the messaging technology, and is then proprietary to their implementation and not portable code.