Spring Integration's reply correlation process details - java

I can't find documentation for reply processing with gateways and service activators.
If I have a gateway which:
1) sends requests to channel ReqChannel
2) accepts replies on channel RepChannel
ReqChannel is connected to a router that routes incoming messages to one of several service activators, say AServiceActivator and BServiceActivator, and those service activators have a configured output-channel="RepChannel".
If I execute more than one method call on the gateway's interface asynchronously, or simultaneously from different threads, how will the gateway correlate the incoming replies to the actual callers?

The gateway creates a temporary reply channel and puts it in the header of the message. This mechanism provides the necessary correlation because each message gets its own reply channel.
If the final consumer (say a service-activator) has no output-channel, the framework automatically sends the reply to the replyChannel header.
For this reason, it is generally not necessary to declare a reply-channel on the gateway for the final consumer to send to.
However, there are times when this is useful - such as if you want to wire-tap the reply channel, or make it a publish-subscribe channel, so the result goes to multiple places.
In this case (when there is a reply-channel on the gateway, and the final consumer sends a message there), the framework simply bridges the explicitly declared reply-channel to the temporary reply channel in the message header.
This is why it is critical to retain the replyChannel header in your flow: you can't send an arbitrary reply to a reply-channel unless you include the original message's replyChannel header.
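As a hedged illustration of that mechanism (the names FileGateway, AService and aChannel are made up, not from the question), an annotation-style configuration of the flow described above could look roughly like this:

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.integration.annotation.ServiceActivator;

@MessagingGateway(defaultReplyChannel = "RepChannel")
public interface FileGateway {
    // Each call gets its own temporary reply channel, stored in the replyChannel
    // header of the request message; that header is what correlates the reply.
    @Gateway(requestChannel = "ReqChannel")
    String process(String payload);
}

public class AService {
    // outputChannel = "RepChannel": the framework bridges RepChannel back to the
    // temporary channel found in the replyChannel header, so the right caller wakes up.
    @ServiceActivator(inputChannel = "aChannel", outputChannel = "RepChannel")
    public String handleA(String payload) {
        return "A processed " + payload;
    }
}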

Related

How to handle response timeout in ActiveMQ when using two queues for communication between two applications

Say you have an Application A and an Application B that communicate using ActiveMQ queues. The communication happens as below.
A sends a request message to application B using the queue named com.example.requestQueue
B consumes the request message from the queue com.example.requestQueue
B takes some time to handle the request and then sends a response back to A using the response queue named com.example.responseQueue
A consumes the response message from the com.example.responseQueue queue and is done
If application B is always answering, there is no problem. 
But if for some reason the application B consumes a message from the request queue com.example.requestQueue and never puts a response message in the response queue com.example.responseQueue, application A will wait forever. 
Is there any way to solve this kind of problem please?
NB: Application A is written in Java with Camel and application B is written in C++.
Thanks.
Camel supports request-reply flows in a single route (exchange pattern InOut), or you can break the request-reply into two separate routes (both exchange pattern InOnly) depending on your use case.
The request-reply pattern has timeout settings available, depending on the Camel component used. Add the timeout to Application A's request-reply route.
ref: SJMS Component - Newer JMS component
ref: JMS Component - Original JMS component
ref: Request Reply pattern - Info on InOut patterns
Side note:
If Application A is also expected to return something to a caller (i.e. a web app or a REST/SOAP client), then you would want to make sure you set the messaging response timeout to be lower than the timeout used by the caller. This allows Application A to return a proper exception/error to the caller before the caller's timeout occurs.
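As a rough sketch of where that timeout goes (Camel Java DSL; the component name "activemq" and the 20-second value are assumptions, not from the question), the request-reply endpoint options could look like this:

import org.apache.camel.builder.RouteBuilder;

public class RequestReplyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:sendRequest")
            .to("activemq:queue:com.example.requestQueue"
                + "?exchangePattern=InOut"              // request-reply handled in a single route
                + "&replyTo=com.example.responseQueue"  // fixed reply queue that Application B writes to
                + "&requestTimeout=20000");             // fail with ExchangeTimedOutException after 20s
    }
}

Keeping requestTimeout below the caller's own timeout, as noted above, lets Application A turn the timeout exception into a proper error response.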

Making sure a message published on a topic exchange is received by at least one consumer

TL;DR: In the context of a topic exchange and queues created on the fly by the consumers, how can a message be redelivered / the producer be notified when no consumer consumes the message?
I have the following components:
a main service, producing files. Each file has a certain category (e.g. pictures.profile, pictures.gallery)
a set of workers, consuming files and producing a textual output from them (e.g. the size of the file)
I currently have a single RabbitMQ topic exchange.
The producer sends messages to the exchange with routing_key = file_category.
Each consumer creates a queue and binds the exchange to this queue for a set of routing keys (e.g. pictures.* and videos.trending).
When a consumer has processed a file, it pushes the result in a processing_results queue.
Now - this works properly, but it still has a major issue. Currently, if the publisher sends a message with a routing key that no consumer is bound to, the message will be lost. This is because even if the queue created by the consumers is durable, it is destroyed as soon as the consumer disconnects since it is unique to this consumer.
Consumer code (python):
channel.exchange_declare(exchange=exchange_name, type='topic', durable=True)
result = channel.queue_declare(exclusive=True, durable=True)
queue_name = result.method.queue

topics = ["pictures.*", "videos.trending"]
for topic in topics:
    channel.queue_bind(exchange=exchange_name, queue=queue_name, routing_key=topic)

channel.basic_consume(my_handler, queue=queue_name)
channel.start_consuming()
Losing a message under these conditions is not acceptable in my use case.
Attempted solution
However, "loosing" a message becomes acceptable if the producer is notified that no consumer received the message (in this case it can just resend it later). I figured out the mandatory field could help, since the specification of AMQP states:
This flag tells the server how to react if the message cannot be routed to a queue. If this flag is set, the server will return an unroutable message with a Return method.
This does indeed work - in the producer, I am able to register a ReturnListener:
rabbitMq.confirmSelect();
rabbitMq.addReturnListener( (int replyCode, String replyText, String exchange, String routingKey, AMQP.BasicProperties properties, byte[] body) -> {
log.info("A message was returned by the broker");
});
rabbitMq.basicPublish(exchangeName, "pictures.profile", true /* mandatory */, MessageProperties.PERSISTENT_TEXT_PLAIN, messageBytes);
This will, as expected, print "A message was returned by the broker" if a message is sent with a routing key no consumer is bound to.
Now, I also want to know when the message was correctly received by a consumer. So I tried registering a ConfirmListener as well:
rabbitMq.addConfirmListener(new ConfirmListener() {
    @Override
    public void handleAck(long deliveryTag, boolean multiple) throws IOException {
        log.info("ACK message {}, multiple = {}", deliveryTag, multiple);
    }

    @Override
    public void handleNack(long deliveryTag, boolean multiple) throws IOException {
        log.info("NACK message {}, multiple = {}", deliveryTag, multiple);
    }
});
The issue here is that the ACK is sent by the broker, not by the consumer itself. So when the producer sends a message with a routing key K:
If a consumer is bound to this routing key, the broker just sends an ACK
Otherwise, the broker sends a basic.return followed by an ACK
Cf the docs:
For unroutable messages, the broker will issue a confirm once the exchange verifies a message won't route to any queue (returns an empty list of queues). If the message is also published as mandatory, the basic.return is sent to the client before basic.ack. The same is true for negative acknowledgements (basic.nack).
So while my problem is theoretically solvable using this, it would make the logic of knowing whether a message was correctly consumed very complicated (especially in the context of multithreading, persistence in a database, etc.):
send a message
on receive ACK:
    if no basic.return was received for this message:
        the message was correctly consumed
    else:
        the message wasn't correctly consumed
on receive basic.return:
    the message wasn't correctly consumed
Possible other solutions
Have a queue for each file category, i.e. the queues pictures_profile, pictures_gallery, etc. Not good since it removes a lot of flexibility for the consumers
Have a "response timeout" logic in the producer. The producer sends a message. It expects an "answer" for this message in the processing_results queue. A solution would be to resend the message if it hasn't been answered to after X seconds. I don't like it though, it would create some additional tricky logic in the producer.
Produce the messages with a TTL of 0, and have the producer listen on a dead-letter exchange. This is the official suggested solution to replace the 'immediate' flag removed in RabbitMQ 3.0 (see paragraph Removal of "immediate" flag). According to the docs of the dead letter exchanges, a dead letter exchange can only be configured per-queue. So it wouldn't work here
[edit] A last solution I see is to have every consumer create a durable queue that isn't destroyed when it disconnects, and have it listen on that queue. Example: consumer1 creates queue-consumer-1, bound to myExchange with routing key abcd. The issue I foresee is that it requires finding a unique identifier for every consumer application instance (e.g. the hostname of the machine it runs on).
I would love to have some inputs on that - thanks!
Related to:
RabbitMQ: persistent message with Topic exchange (not applicable here since queues are created "on the fly")
Make sure the broker holds messages until at least one consumer gets it
RabbitMQ Topic Exchange with persisted queue
[edit] Solution
I ended up implementing something that uses basic.return, as mentioned earlier. It is actually not so tricky to implement; you just have to make sure that the method producing the messages and the method handling the basic returns are synchronized (or share a lock if they are not in the same class), otherwise you can end up with interleaved execution flows that will mess up your business logic.
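A minimal sketch of that locking idea (the names TrackedPublisher, pendingIds and myExchange are illustrative, not from the post): the publish path and the return handler share one lock, so a basic.return cannot interleave with the bookkeeping done right after basicPublish.

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

public class TrackedPublisher {
    private final Object lock = new Object();
    private final Channel channel;
    // message ids we have published and not (yet) seen returned as unroutable
    private final Set<String> pendingIds = new HashSet<>();

    public TrackedPublisher(Channel channel) {
        this.channel = channel;
        channel.addReturnListener((int replyCode, String replyText, String exchange, String routingKey,
                                   AMQP.BasicProperties properties, byte[] body) -> {
            synchronized (lock) {
                // the broker could not route this message: drop it from pending so the caller can resend later
                pendingIds.remove(properties.getMessageId());
            }
        });
    }

    public void publish(String routingKey, byte[] body, String msgId) throws IOException {
        synchronized (lock) {
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder().messageId(msgId).build();
            channel.basicPublish("myExchange", routingKey, true /* mandatory */, props, body);
            pendingIds.add(msgId);   // assume routed until a basic.return says otherwise
        }
    }
}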
I believe an alternate exchange would be the best fit for your use case, for the part regarding identifying unrouted messages.
Whenever an exchange with a configured AE cannot route a message to any queue, it publishes the message to the specified AE instead.
Basically, upon creation of the "main" exchange, you configure an alternate exchange for it.
For the referenced alternate exchange, I tend to go with a fanout, then create a queue (notroutedq) bound to it.
This means any message that is not published to at least one of the queues bound to your "main" exchange will end up in the notroutedq.
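For illustration, a minimal declaration sketch with the Java client (exchange and queue names are placeholders):

import java.util.HashMap;
import java.util.Map;
import com.rabbitmq.client.Channel;

public class AlternateExchangeSetup {
    public static void declare(Channel channel) throws java.io.IOException {
        // the "main" topic exchange, created with an alternate-exchange argument
        Map<String, Object> args = new HashMap<>();
        args.put("alternate-exchange", "myExchange.ae");
        channel.exchangeDeclare("myExchange", "topic", true, false, args);

        // the alternate exchange itself: a fanout, so the routing key is ignored
        channel.exchangeDeclare("myExchange.ae", "fanout", true);

        // any message the main exchange cannot route ends up in notroutedq
        channel.queueDeclare("notroutedq", true, false, false, null);
        channel.queueBind("notroutedq", "myExchange.ae", "");
    }
}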
Now regarding your statement:
because even if the queue created by the consumers is durable, it is destroyed as soon as the consumer disconnects since it is unique to this consumer.
It seems that you have configured your queues with auto-delete set to true.
If so, in case of a disconnect, as you stated, the queue is destroyed and the messages still present on it are lost; this case is not covered by the alternate exchange configuration.
It's not clear from your use case description whether you'd expect a message to end up in more than one queue in some cases; it seems more like one queue per type of processing (while keeping the grouping flexible). If the queue split is indeed related to the type of processing, I do not see the benefit of setting the queue to auto-delete, except maybe not having to do any cleanup maintenance when you want to change the bindings.
Assuming you can go with durable queues, a dead letter exchange (I would again go with a fanout) with a binding to a dlq would cover the missing cases (a minimal declaration sketch follows the list below):
not routed: covered by the alternate exchange
correct processing: already handled by your processing_results queue
problematic processing, or processing that takes too long: covered by the dead letter exchange, in which case the additional headers added when dead-lettering the message can even help identify the type of action to take
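Sticking with placeholder names, the dead-letter side could be declared along these lines (the consumer queue is durable and not auto-delete, per the assumption above):

import java.util.HashMap;
import java.util.Map;
import com.rabbitmq.client.Channel;

public class DeadLetterSetup {
    public static void declare(Channel channel) throws java.io.IOException {
        // fanout dead letter exchange with a single dlq bound to it
        channel.exchangeDeclare("myExchange.dlx", "fanout", true);
        channel.queueDeclare("dlq", true, false, false, null);
        channel.queueBind("dlq", "myExchange.dlx", "");

        // durable, non-auto-delete consumer queue that dead-letters rejected or expired messages
        Map<String, Object> qArgs = new HashMap<>();
        qArgs.put("x-dead-letter-exchange", "myExchange.dlx");
        channel.queueDeclare("queue-consumer-1", true, false, false, qArgs);
        channel.queueBind("queue-consumer-1", "myExchange", "pictures.*");
    }
}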

AWS SQS: Is there a way SQS can call my consumer every time a message is pushed?

Is there a way by which AWS SQS can call my REST API? Basically, as soon as a message is pushed to AWS SQS, I want to hear about it and perform the required action. I could schedule a listener that pulls messages every second, but that won't be an optimized solution, and the queue might also be empty (sometimes).
Thanks In Advance!!
A couple of thoughts:
Use Publisher/Subscriber
Look into using a publisher-subscriber model with SNS/SQS, so that you publish a message to SNS and subscribe to it via SQS. If you absolutely need to handle a message as soon as it is published, you can publish to SNS and add another consumer in addition to your SQS subscription (such as a Lambda subscriber that calls your REST API?) to process it instead.
SQS Long Polling
Regarding SQS, it sounds like you would benefit from long polling. From the documentation:
Long polling helps reduce your cost of using Amazon SQS by reducing the number of empty responses (when there are no messages available to return in reply to a ReceiveMessage request sent to an Amazon SQS queue) and eliminating false empty responses (when messages are available in the queue but aren't included in the response):
Long polling reduces the number of empty responses by allowing Amazon SQS to wait until a message is available in the queue before sending a response. Unless the connection times out, the response to the ReceiveMessage request contains at least one of the available messages, up to the maximum number of messages specified in the ReceiveMessage action.
Long polling eliminates false empty responses by querying all (rather than a limited number) of the servers.
Long polling returns messages as soon as any message becomes available.
Also from the documentation, to enable long polling programmatically, use the following for any of these SQS actions:
ReceiveMessage: WaitTimeSeconds parameter
CreateQueue: ReceiveMessageWaitTimeSeconds attribute
SetQueueAttributes: ReceiveMessageWaitTimeSeconds attribute
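As an illustration (AWS SDK for Java v2; the queue URL is a placeholder), a long-polling receive looks roughly like this:

import java.util.List;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class LongPollingExample {
    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        ReceiveMessageRequest request = ReceiveMessageRequest.builder()
                .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue") // placeholder
                .waitTimeSeconds(20)        // long polling: wait up to 20s instead of returning empty
                .maxNumberOfMessages(10)
                .build();
        List<Message> messages = sqs.receiveMessage(request).messages();
        messages.forEach(m -> System.out.println(m.body()));
    }
}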
Reference:
Publish–subscribe (PubSub) Pattern
SQS Documentation - Long Polling
Sounds like you would be much better off using SNS instead of SQS. What you are trying to get SQS to do, SNS was designed to do:
You can use Amazon SNS to send notification messages to one or more HTTP or HTTPS endpoints. When you subscribe an endpoint to a topic, you can publish a notification to the topic and Amazon SNS sends an HTTP POST request delivering the contents of the notification to the subscribed endpoint. When you subscribe the endpoint, you select whether Amazon SNS uses HTTP or HTTPS to send the POST request to the endpoint. If you use HTTPS, then you can take advantage of the support in Amazon SNS for the following...
http://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.html
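For illustration (AWS SDK for Java v2; the topic ARN and the endpoint URL are placeholders), subscribing a REST endpoint to a topic could look like this; SNS then POSTs each published message to that endpoint:

import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.SubscribeRequest;

public class SnsHttpSubscription {
    public static void main(String[] args) {
        SnsClient sns = SnsClient.create();
        sns.subscribe(SubscribeRequest.builder()
                .topicArn("arn:aws:sns:us-east-1:123456789012:my-topic") // placeholder
                .protocol("https")
                .endpoint("https://api.example.com/sns-handler")         // placeholder REST endpoint
                .build());
        // SNS first sends a SubscriptionConfirmation POST that the endpoint must confirm.
    }
}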

RabbitMQ RPC tutorial query

I was going through the tutorial shared by RabbitMQ here
I am assuming that the client code below
while (true)
{
    var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
    if (ea.BasicProperties.CorrelationId == corrId)
    {
        return Encoding.UTF8.GetString(ea.Body);
    }
}
would receive all messages on the queue and will unnecessarily iterate through messages not destined for it. Is there any way we can avoid this, i.e. can we modify the client to receive only the messages intended for it?
The basic workflow I intend to achieve through RabbitMQ is a request-response pattern: a request is received by a web service, which puts the data in a queue; the data object has a unique reference number. This is picked up by an asynchronous TCP client, which sends data over a TCP/IP connection based on the message it received.
On receiving the reply from the asynchronous TCP/IP channel, it parses the data and responds back on the queue with the corresponding request reference number.
The RPC approach is well suited for this, but the client code shared has this shortcoming; I would appreciate feedback on it.
Actually, I didn't quite understand your aim, but when you create an RPC model you have to create a "reply queue"; this queue is bound only to the client.
This means you will receive back only that client's messages, not all messages.
Since the RabbitMQ RPC model is asynchronous, you can execute more than one request without waiting for the responses, and the replies may not arrive in the same order the requests were published.
The correlation id is necessary to map your client requests to the replies, so there are no "unnecessary" messages.
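A minimal sketch of that idea with the RabbitMQ Java client (the queue name rpc_queue and the printing handler are placeholders): the reply queue is exclusive to this client, and the correlation id ties each reply to its request.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

public class RpcClientSketch {
    public static void sendRequest(Channel channel, String requestBody) throws IOException {
        // server-named queue: exclusive and auto-delete, so it receives only this client's replies
        String replyQueue = channel.queueDeclare().getQueue();
        String corrId = UUID.randomUUID().toString();

        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .correlationId(corrId)
                .replyTo(replyQueue)
                .build();
        channel.basicPublish("", "rpc_queue", props, requestBody.getBytes(StandardCharsets.UTF_8));

        channel.basicConsume(replyQueue, true, (consumerTag, delivery) -> {
            // only this client's replies arrive here; the correlation id maps them back to requests
            if (corrId.equals(delivery.getProperties().getCorrelationId())) {
                System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
            }
        }, consumerTag -> { });
    }
}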
hope it helps

How to make a JMS Synchronous request

I have a webapp that is expected to fetch and display data from an external app which is accessible only via messaging (JMS).
So, if a user submits a request on a browser, the same HTTP request thread will have to interact with the Messaging system (MQ Series) such that the same request thread can display the data received from the Messaging System.
Is there a pattern I can make use of here? I saw some vague references on the net that use "Correlation ID" in this way:
Msg m = new TextMsg("findDataXYZ");
String cr_id = m.setCorrelationID(id);
sendQueue.send(m);
// now start listening to the Queue for a msg that bears that specific cr_id
Response r = receiverQueue.receive(cr_id);
Is there something better out there? The other patterns I found expect the response to be received asynchronously, which is not an option for me, since I have to send the response back on the same HTTP request.
The request/reply messaging pattern is useful for your requirement. You typically use a CorrelationId to relate request & reply messages.
While sending the request message, you set the JMSReplyTo destination on it. Typically a temporary queue is used as the JMSReplyTo destination. When creating a consumer to receive the response, use a selector on JMSCorrelationID, something like:
cons = session.createConsumer(tempDestination, "JMSCorrelationID = '" + requestMsg.getJMSMessageID() + "'");
At the other end, the application that processes the request message must use the JMSReplyTo destination to send the response. It must also take the MessageID of the request message and set it as the CorrelationID of the response message.
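Putting those pieces together as a hedged sketch (plain JMS; the request queue name and the 30-second timeout are placeholders), the blocking call inside the HTTP request thread could look like this:

import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class SyncJmsRequest {
    public static Message requestReply(Session session, String body) throws Exception {
        Queue requestQueue = session.createQueue("EXTERNAL.APP.REQUEST");   // placeholder name
        TemporaryQueue replyQueue = session.createTemporaryQueue();

        TextMessage request = session.createTextMessage(body);
        request.setJMSReplyTo(replyQueue);                                  // tell the responder where to reply
        session.createProducer(requestQueue).send(request);

        // The responder is expected to copy the request's JMSMessageID into JMSCorrelationID.
        String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
        MessageConsumer consumer = session.createConsumer(replyQueue, selector);
        return consumer.receive(30000);   // block up to 30s, then return null so the webapp can report an error
    }
}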
First, open the response queue. Then pass that object to the set reply-to method on the message. That way the service responding to your request knows where to send the reply. Typically the service will copy the message ID to the correlation ID field, so when you send the message, take the message ID you get back and use it to listen on the reply queue. Of course, if you use a dynamic reply-to queue, even that isn't necessary - just listen for the next message on the queue.
There's sample code that shows all of this. If you installed to the default location, the sample code lives at "C:\Program Files (x86)\IBM\WebSphere MQ\tools\jms\samples\simple\SimpleRequestor.java" on a Windows box or /var/mqm/tools/jms/samples/simple/SimpleRequestor.java on a *nix box.
And on the off chance you are wondering "install what, exactly?" the WMQ client install is downloadable for free as SupportPac MQC71.
