My Java application sends messages to a RabbitMQ exchange, and the exchange then routes the messages to the bound queue.
I use the Spring AMQP Java library with RabbitMQ.
The problem: the message arrives in the queue, but it stays in the "Unacknowledged" state and never becomes "Ready".
What could be the reason?
An Unacknowledged message means that it has been delivered to your consumer, but the consumer has never sent an ACK back to the RabbitMQ broker to say that it has finished processing it.
I'm not overly familiar with the Spring Framework plugin, but somewhere (for your consumer) you will be declaring your queue; it might look something like this (taken from http://www.rabbitmq.com/tutorials/tutorial-two-java.html):
channel.queueDeclare(queueName, false, false, false, null); // durable, exclusive, autoDelete, arguments
then you will setup your consumer
boolean autoAck = false;
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(queueName, autoAck, consumer);
autoAck above is a boolean; by setting it to false we are explicitly telling RabbitMQ that the consumer will acknowledge each message it is given. If this flag were set to true, you wouldn't see the Unacknowledged count in RabbitMQ at all; instead, as soon as a message has been delivered to the consumer, it would be removed from the queue.
To acknowledge a message you would do something like this:
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
//...do something with the message...
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false); //the false flag is to do with multiple message acknowledgement
If you can post some of your consumer code then I might be able to help further. In the meantime, take a look at BlockingQueueConsumer: in the constructor you will see that you can set the AcknowledgeMode, and its nextMessage() method returns a Message object whose getDeliveryTag() method returns the Long delivery tag that you would send back in the basicAck.
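For the Spring AMQP side specifically, a manual-ack consumer might look roughly like this sketch (not taken from your code: the queue name and connection settings are placeholders, and in Spring AMQP 1.x ChannelAwareMessageListener lives in org.springframework.amqp.rabbit.core rather than the 2.x package imported here):
import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;

public class ManualAckContainer {
    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("myQueue"); // placeholder queue name
        container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        container.setMessageListener(new ChannelAwareMessageListener() {
            @Override
            public void onMessage(Message message, Channel channel) throws Exception {
                // ...do something with the message...
                // Without this ack, the message stays Unacknowledged forever.
                channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
            }
        });
        container.start();
    }
}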
Just to add my 2 cents on another possible reason for messages staying in an unacknowledged state, even though the consumer makes sure to use the basicAck method:
Sometimes multiple instances of a process with an open RabbitMQ connection stay running, one of which may cause a message to get stuck in an unacknowledged state, preventing another instance of the consumer from ever refetching it.
You can access the RabbitMQ management console (on a local machine this should be available at localhost:15672) and check whether multiple instances are holding on to the channel, or whether only a single instance is currently active:
Find the redundant running task (in this case, java) and terminate it. After removing the rogue process, you should see the message jump back to the Ready state.
Related
Java & RabbitMQ here. I need to implement sort of a poison pill pattern where, upon handling a particular message, the consumer needs to cancel itself and stop receiving/handling any further messages. Full stop and clean up. The message kills the consumer and releases the thread, memory, etc.
I see consumers have a handleCancel method that they can implement to respond to cancellation commands from the outside, but how do I handle a poison pill message inside a consumer that tells the consumer to fall over dead?
I don't think RabbitMQ handles this scenario for some reason.
My solution which appears to be working:
Implement a stateful consumer that exists in one of two states: Processing (default) and Terminating
When it's in the Processing state, it consumes and handles messages off the queue as normal. When it receives the magical poison pill (perhaps a value in the message header/properties, or maybe a specific value in the message body itself), it sets its status to Terminating and does not process the message. It also uses an async event bus to send a custom ShutdownConsumerEvent to an external handler. This event is instantiated with both the channel and consumerTag given to the consumer (e.g. ShutdownConsumerEvent event = new ShutdownConsumerEvent(channel, consumerTag);)
Any further messages the consumer receives while in the Terminating state get republished to the queue, with ACKs enabled so we don't lose them and keep pseudo-transactionality
When the external ShutdownConsumerSubscriber (a handler registered to receive ShutdownConsumerEvents) receives the command to shut down the consumer, it does so by issuing a channel.basicCancel(consumerTag); a sketch of the consumer side is below
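Here is a rough, self-contained sketch of that stateful consumer using the plain RabbitMQ Java client. The "poison-pill" header name is made up for illustration, and for brevity it requeues in-flight messages with basicNack instead of republishing them, and cancels inline instead of going through the event bus:
import java.io.IOException;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class PoisonPillConsumer extends DefaultConsumer {
    private enum State { PROCESSING, TERMINATING }
    private volatile State state = State.PROCESSING;

    public PoisonPillConsumer(Channel channel) {
        super(channel);
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties props, byte[] body) throws IOException {
        if (state == State.TERMINATING) {
            // Put in-flight messages back on the queue instead of processing them.
            getChannel().basicNack(envelope.getDeliveryTag(), false, true);
            return;
        }
        boolean poisonPill = props.getHeaders() != null
                && props.getHeaders().containsKey("poison-pill");
        if (poisonPill) {
            state = State.TERMINATING;
            getChannel().basicAck(envelope.getDeliveryTag(), false);
            // In the design above this cancel is issued by the external
            // ShutdownConsumerSubscriber; doing it inline keeps the sketch short.
            getChannel().basicCancel(consumerTag);
            return;
        }
        // ...normal message processing...
        getChannel().basicAck(envelope.getDeliveryTag(), false);
    }
}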
I'm implementing two services: A and B. I'm trying to implement synchronous communication (Remote Procedure Call, or RPC) between A and B.
Scenario
Very simple: A needs information from B, so A sends a message and waits for a reply from B. A can't continue without this information.
The question
I'm using the rabbitTemplate.convertSendAndReceive method from Spring RabbitMQ. This works as expected when B is running. My code is very similar to the one in this link.
If B is not running, A waits a short time (a few seconds) and receives null as the reply. In this case, I was expecting some exception saying that there is no consumer available.
The documentation says:
By default, the send and receive methods will timeout after 5 seconds and return null. This can be modified by setting the replyTimeout property. Starting with version 1.5, if you set the mandatory property to true (or the mandatory-expression evaluates to true for a particular message), if the message cannot be delivered to a queue an AmqpMessageReturnedException will be thrown. This exception has returnedMessage, replyCode, replyText properties, as well as the exchange and routingKey used for the send.
I tried to set:
rabbitTemplate.setMandatory(true);
But no exception is thrown. I think this is because the queue still exists in RabbitMQ: there are messages that A sent while B was down, and they are still waiting to be processed by B.
So, is the null return how I know that there is no consumer?
Another problem in this case: the messages sent by A will wait in the queue until B consumes them. But since I'm implementing synchronous communication, this behaviour makes no sense if B is not running, because when B starts again it will consume and process them without returning the information to A. This will be "lost" processing. Is this normal with RabbitMQ RPC communication?
I'm using Spring Boot 1.5.9 and the dependency spring-cloud-starter-stream-rabbit
Mandatory has nothing to do with consumers; it only ensures that the message was delivered to a queue.
Yes, getting null is an indication that the operation timed out (5 seconds by default).
You can configure the queue with a time to live (TTL) so stale messages will be removed from the queue if they are not processed within that time; see the sketch below.
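For example, with Spring AMQP you could declare the request queue with a per-message TTL. This is only a sketch: the queue name and the 5-second value are placeholders, and QueueBuilder requires Spring AMQP 1.6 or later:
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RpcQueueConfig {

    // x-message-ttl (in milliseconds) drops requests that sit in the queue
    // longer than the caller's 5-second reply timeout, so B never processes
    // a request whose caller has already given up.
    @Bean
    public Queue rpcRequestQueue() {
        return QueueBuilder.durable("rpc.requests")
                .withArgument("x-message-ttl", 5000)
                .build();
    }
}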
Using RabbitMQ, I have two types of consumers: FileConsumer writes messages to file and MailConsumer mails messages. There may be multiple consumers of each type, say three running MailConsumers and one FileConsumer instance.
How can I do this:
Each published message should be handled by exactly one FileConsumer instance and one MailConsumer instance
Publishing a message should be done once, not once per queue (if possible)
If there are no consumers connected, messages should be queued until consumed, not dropped
What type of exchange etc should I use to get this behavior? I'd really like to see some example/pseudo-code to make this clear.
This should be easy to do, but I couldn't figure it out from the docs. It seems the fanout example should work, but I'm confused by these "anonymous queues", which seem like they would lead to sending the same message to each consumer.
If you create a queue without the auto-delete flag, the queue will stay alive even after consumers disconnect.
Note that if you declare the queue as durable, it will still be present even after a broker restart.
If you then publish messages with the delivery-mode=2 property set (meaning the message is persistent), such messages will also survive a broker restart, provided the queue itself is durable.
Using the fanout exchange type is not mandatory. You can also use topic for finer-grained message routing if you need it.
UPD: a step-by-step way to get what you show in your schema.
Declare a durable exchange, say main, as exchange.declare(exchange-name=main, type=fanout, durable=true).
Declare two queues, say files and mails, as queue.declare(queue-name=files, durable=true) and queue.declare(queue-name=mails, durable=true).
Bind both queues to the exchange as queue.bind(queue-name=files, exchange-name=main) and queue.bind(queue-name=mails, exchange-name=main).
At this point you can publish messages to the main exchange (see the note about delivery-mode above) and consume with any number of consumers: from files with FileConsumer and from mails with MailConsumer. Without any consumers on the queues, messages will be queued and stay there until they are consumed (or until a broker restart, if they are not persistent).
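A sketch of those steps with the RabbitMQ Java client (the host and payload are placeholders):
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class FanoutSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        // durable fanout exchange (survives broker restart)
        ch.exchangeDeclare("main", "fanout", true);
        // durable, non-exclusive, non-auto-delete queues: they hold messages
        // even when no consumers are attached
        ch.queueDeclare("files", true, false, false, null);
        ch.queueDeclare("mails", true, false, false, null);
        ch.queueBind("files", "main", "");
        ch.queueBind("mails", "main", "");

        // PERSISTENT_TEXT_PLAIN sets delivery-mode=2, so the message also
        // survives a broker restart
        ch.basicPublish("main", "", MessageProperties.PERSISTENT_TEXT_PLAIN,
                "hello".getBytes(StandardCharsets.UTF_8));

        ch.close();
        conn.close();
    }
}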
I am trying to solve the following case:
I am consuming messages, but a system I depend on for proper message processing (say, a database) takes an outage.
I am using CLIENT_ACKNOWLEDGE, and only calling the .acknowledge() method when no exception is thrown.
This works fine: when I throw an exception, messages are not acknowledged, and I can see the unacknowledged count building up. However, these messages have all already been delivered to the consumer.
Suppose now the database comes back online and any new message is processed successfully, so I call .acknowledge() on them. I have read that calling .acknowledge() acknowledges not only that message, but also all previously received messages in the consumer.
This is not what I want! I need those previously unacknowledged messages to be redelivered/retried. I would like to keep them on the queue and let JMS handle the retry, since maintaining a collection of "messages to be retried" in the consumer risks losing those messages (.acknowledge() has already ack'ed all of them, plus say the hardware fails).
Is there a way to explicitly acknowledge specific messages and not have this "acknowledge all prior messages" behavior?
Acknowledging a specific message is not defined by the JMS specification; hence some JMS implementers provide per-message acknowledgement and some don't. You will need to check your JMS provider's documentation.
Message queues generally have an option for how messages are delivered to a client: either first in, first out (FIFO) or priority based. Choose the FIFO option so that all messages are delivered in the same order they arrived in the queue. When the database goes offline and comes back, call the recover method to redeliver all messages in the same order again.
You need to call recover on your session after the failure to restart message delivery from the first unacknowledged message. From the JMS 1.1 spec, section 4.4.11:
When CLIENT_ACKNOWLEDGE mode is used, a client may build up a large number of unacknowledged messages while attempting to process them. A JMS provider should provide administrators with a way to limit client over-run so that clients are not driven to resource exhaustion and ensuing failure when some resource they are using is temporarily blocked.
A session’s recover method is used to stop a session and restart it with its first unacknowledged message. In effect, the session’s series of delivered messages is reset to the point after its last acknowledged message. The messages it now delivers may be different from those that were originally delivered due to message expiration and the arrival of higher-priority messages.
In my JMS applications we use temporary queues on producers to be able to receive replies back from consumer applications.
I am facing exactly same issue on my end as mentioned in this thread: http://activemq.2283324.n4.nabble.com/jira-Created-AMQ-3336-Temporary-Destination-errors-on-H-A-failover-in-broker-network-with-Failover-tt-td3551034.html#a3612738
Whenever I restarted an arbitrary broker in my network, I got many errors like this in my consumer application log while trying to send a reply to a temporary queue:
javax.jms.InvalidDestinationException:
Cannot publish to a deleted Destination: temp-queue://ID:...
Then I saw Gary's response there suggesting to use
jms.watchTopicAdvisories=false
as a URL param on the client brokerURL. I promptly changed my client broker URLs with this additional parameter. However, now I am seeing errors like this when I restart the brokers in the network for this failover testing:
javax.jms.JMSException:
The destination temp-queue://ID:client.host-65070-1308610734958-2:1:1 does not exist.
I am using ActiveMQ 5.5 version. And my client broker URL looks like this:
failover:(tcp://amq-host1:61616,tcp://amq-host2.tred.aol.com:61616,tcp://amq-host3:61616,tcp://amq-host4:61616)?jms.useAsyncSend=true&timeout=5000&jms.watchTopicAdvisories=false
Additionally here is my activemq config XML for one of the 4 brokers:
amq1.xml
Can someone here please look into this problem and suggest what mistake I am making in this setup?
Update:
To clarify further on how I am doing request-response in my code:
I already use a per-producer destination (i.e. a temporary queue) and set it in the reply-to header of every message.
I already send a per-message unique correlation identifier in the JMSCorrelationID header.
As far as I know, even Camel and Spring use a temporary queue for their request-response mechanism. The only difference is that the Spring JMS implementation creates and destroys a temporary queue for every message, whereas I create the temporary queue for the lifetime of the producer. This temporary queue is destroyed when the client (producer) app shuts down, or by the AMQ broker when it realizes there is no active producer attached to it.
I already set a message expiry on each message on the producer side so that messages are not held in a queue for too long (60 sec).
There is a broker attribute, org.apache.activemq.broker.BrokerService#cacheTempDestinations, that should help in the failover: case.
Set that to true in the XML configuration, and a temp destination will not be removed immediately when a client disconnects.
A fast failover: reconnect will then be able to produce and/or consume from the temp queue again.
There is a timer task based on timeBeforePurgeTempDestinations (default 5 seconds) that handles cache removal.
One caveat though: I don't see any tests in activemq-core that make use of that attribute, so I can't give you any guarantee on this one.
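In the broker XML that would look something like this (a sketch, untested per the caveat above; timeBeforePurgeTempDestinations is shown with its default value just for illustration):
<!-- amq1.xml (sketch): cacheTempDestinations keeps temp destinations around
     briefly after a client disconnect instead of deleting them immediately -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="amq1"
        cacheTempDestinations="true"
        timeBeforePurgeTempDestinations="5000">
    <!-- ...transport connectors, network connectors, etc. as before... -->
</broker>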
Temporary queues are created on the broker to which the requestor (producer) in your request-reply scenario connects. They are created from a javax.jms.Session, so on that session disconnecting, either because of client disconnect or broker failure/failover, those queues are permanently gone. None of the other brokers will understand what is meant when one of your consumers attempts to reply to those queues; hence your exception.
This requires an architectural shift in mindset assuming that you want to deal with failover and persist all your messages. Here is a general way that you could attack the problem:
Your reply-to headers should refer to a queue specific to the requestor process, e.g. queue:response.<client id>. The client id might be a standard name if you have a limited number of clients, or a UUID if you have a large number of them.
The outbound message should set a correlation identifier (simply a string that lets you associate a request with a response; requestors, after all, might make more than one request at the same time). This is set in the JMSCorrelationID header, and ought to be copied from the request to the response message.
The requestor needs to set up a listener on that queue that will return the message body to the requesting thread based on that correlation id. There is some multithreading code that needs to be written for this, as you'll need to manually manage something like a map of correlation ids to originating threads (via Futures, perhaps); see the sketch at the end of this answer.
This is a similar approach to that taken by Apache Camel for request-response over messaging.
One thing to be mindful of is that the queue will not go away when the client does, so you should set a time to live on the response message so that it gets deleted from the broker if it has not been consumed; otherwise you will build up a backlog of unconsumed messages. You will also need to set up a dead-letter queue strategy to automatically discard expired messages.
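Here is a rough sketch of that requestor side (all names are placeholders; error handling, reconnection, and the replier side are omitted):
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class Requestor implements MessageListener {
    private final Map<String, CompletableFuture<Message>> pending = new ConcurrentHashMap<>();
    private final Session sendSession;
    private final MessageProducer producer;
    private final Destination replyQueue;

    public Requestor(Connection connection, String requestQueue, String clientId)
            throws JMSException {
        // Two sessions: a JMS session must not be shared between the sending
        // thread and the listener's delivery thread.
        sendSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Session receiveSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = sendSession.createProducer(sendSession.createQueue(requestQueue));
        producer.setTimeToLive(60_000); // expire requests that are never consumed
        replyQueue = receiveSession.createQueue("response." + clientId);
        receiveSession.createConsumer(replyQueue).setMessageListener(this);
    }

    public Message request(String body, long timeoutMs) throws Exception {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<Message> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        try {
            TextMessage msg = sendSession.createTextMessage(body);
            msg.setJMSCorrelationID(correlationId);
            msg.setJMSReplyTo(replyQueue);
            producer.send(msg);
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pending.remove(correlationId);
        }
    }

    @Override
    public void onMessage(Message reply) {
        try {
            CompletableFuture<Message> future = pending.get(reply.getJMSCorrelationID());
            if (future != null) {
                future.complete(reply); // wakes up the waiting requestor thread
            }
        } catch (JMSException ignored) {
            // a reply without a correlation id can safely be dropped
        }
    }
}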