I am reading the documentation about the Channel.basicCancel operation in RabbitMQ (https://www.rabbitmq.com/consumer-cancel.html). The docs say that one of the possible cancellation cases is when the consumer sends a cancel signal on the same channel on which it is listening.
Is this the only possibility? Can you cancel a remote consumer running on a different channel/connection/process?
I am trying to send the cancel request from another process. When I do, it ends with an exception java.io.IOException: Unknown consumerTag, as if this operation were restricted to cancelling local consumers (on your own channel or connection).
UPDATE:
I noticed that this "Unknown consumerTag" exception is the result of an initial validation inside com.rabbitmq.client.impl.ChannelN.basicCancel(String):
Consumer originalConsumer = (Consumer) this._consumers.get(consumerTag);
if (originalConsumer == null) {
    throw new IOException("Unknown consumerTag");
}
...
But there might still be some RPC call that does the trick...
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The documentation is correct: you must cancel a consumer from its own channel/connection.
Other options include making your consumers aware of "cancellation messages" that will cause them to stop themselves, or using the API to close an entire connection, which will close all channels associated with it.
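For the first option, here is a minimal sketch with the RabbitMQ Java client (the queue name, consumer tag and the "CANCEL" token are purely illustrative application conventions, not a RabbitMQ feature): the consumer watches for a control message and cancels itself on its own channel, which is the only place basicCancel is allowed.
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

channel.basicConsume("work-queue", true, "my-consumer-tag", new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        if ("CANCEL".equals(new String(body, StandardCharsets.UTF_8))) {
            // cancel on the consumer's own channel
            getChannel().basicCancel(consumerTag);
            return;
        }
        // ... normal processing ...
    }
});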
Is it possible to rollback an async-processed message in ActiveMQ? I'm consuming the next message while the first one is still processing, so while I'm trying to rollback the first message on another (non-ActiveMQ-pool) thread, I'm getting the above error. Should I eventually send the message to the DLQ manually?
Message error handling can work a couple of ways:
Broker-side 'redelivery policy': the client rolls back n times (the default is usually 6 retries) and the broker automatically moves the message to a Dead Letter Queue (DLQ).
Client-side: the application consumes the message and then produces it to the DLQ itself.
Option #1 is good for unplanned/planned outages (database down, etc.) where you want automatic retry. The redelivery policy can also be configured when the client connects to the broker.
Option #2 is good for 'bad data' scenarios where you know the message will never be able to be processed. This is ideal, because you can move the message on the first consumption and don't have to reject it n times.
When you combine infinite retry with #1 and include #2 in your application flow, you get a robust process flow of automatic retry and moving bad data out of the way quickly. Best of breed =)
ActiveMQ Redelivery policy
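For reference, a minimal sketch of configuring option #1 on the client side with the ActiveMQ Java client (the broker URL and the values are illustrative); once the retries are exhausted the broker dead-letters the message, to ActiveMQ.DLQ by default.
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
RedeliveryPolicy policy = factory.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay(1000);   // wait 1s before the first redelivery
policy.setUseExponentialBackOff(true);    // grow the delay between attempts
policy.setMaximumRedeliveries(6);         // after 6 rollbacks the broker moves the message to the DLQ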
Given the following code:
public void doSomething(Channel channel1) {
    DefaultConsumer consumer1 = new DefaultConsumer(channel1);
    channel1.basicConsume("Queue1Name", ..., "Consumer1Tag", consumer1);
}
If channel1 gets disconnected for some reason, which will cause messages to not be consumed (?):
Can the same consumer instance (consumer1) be used with a different channel in some way, or are a new instance and a new channel required?
Will a redeclaration of Queue1Name be aware of any changes? i.e.:
DeclareOk queueResponse = someOtherChannel.queueDeclare("Queue1Name", ........);
int consumerCount = queueResponse.getConsumerCount();
What will be the consumer count?
The reason for this question is that I have a background thread running and checking the connection to RabbitMQ once in a while. I check that channels and connections are open, and in case they are not, I reset them. I want to keep a single consumer on Queue1 and still be able to handle the scenario where I find its channel is closed.
If channel1 gets disconnected, its messages won't be consumed; that is correct. They will remain in the queue.
Yes, you can reconnect to the same queue; the messages left behind will have to be handled before you can receive the new messages, though.
I think you need to remove the queue before you can make a new one with the same name (I'm not 100% sure). The consumer count should be 1 after you reconnect. But if you do that, any messages left in the queue will be lost.
You should really try this out yourself in your application and observe the behaviour.
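A minimal sketch of such a reconnect path, assuming Queue1Name was declared durable and non-auto-delete (so it survives the consumer going away) and that the underlying connection is still open. The old consumer instance stays tied to the dead channel, so a fresh channel and a fresh DefaultConsumer are created instead of reusing consumer1:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;

if (channel1 == null || !channel1.isOpen()) {
    Channel newChannel = connection.createChannel();
    // re-declaring with the same arguments is idempotent and does not touch existing messages
    newChannel.queueDeclare("Queue1Name", true, false, false, null);
    newChannel.basicConsume("Queue1Name", false, "Consumer1Tag",
            new DefaultConsumer(newChannel) { /* override handleDelivery(...) as before */ });
}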
TLDR; In the context of a topic exchange and queues created on the fly by the consumers, how to have a message redelivered / the producer notified when no consumer consumes the message?
I have the following components:
a main service, producing files. Each file has a certain category (e.g. pictures.profile, pictures.gallery)
a set of workers, consuming files and producing a textual output from them (e.g. the size of the file)
I currently have a single RabbitMQ topic exchange.
The producer sends messages to the exchange with routing_key = file_category.
Each consumer creates a queue and binds the exchange to this queue for a set of routing keys (e.g. pictures.* and videos.trending).
When a consumer has processed a file, it pushes the result in a processing_results queue.
Now - this works properly, but it still has a major issue. Currently, if the publisher sends a message with a routing key that no consumer is bound to, the message will be lost. This is because even if the queue created by the consumers is durable, it is destroyed as soon as the consumer disconnects since it is unique to this consumer.
Consumer code (python):
channel.exchange_declare(exchange=exchange_name, type='topic', durable=True)
result = channel.queue_declare(exclusive=True, durable=True)
queue_name = result.method.queue

topics = ["pictures.*", "videos.trending"]
for topic in topics:
    channel.queue_bind(exchange=exchange_name, queue=queue_name, routing_key=topic)

channel.basic_consume(my_handler, queue=queue_name)
channel.start_consuming()
Losing a message in this condition is not acceptable in my use case.
Attempted solution
However, "loosing" a message becomes acceptable if the producer is notified that no consumer received the message (in this case it can just resend it later). I figured out the mandatory field could help, since the specification of AMQP states:
This flag tells the server how to react if the message cannot be routed to a queue. If this flag is set, the server will return an unroutable message with a Return method.
This is indeed working: in the producer, I am able to register a ReturnListener:
rabbitMq.confirmSelect();
rabbitMq.addReturnListener( (int replyCode, String replyText, String exchange, String routingKey, AMQP.BasicProperties properties, byte[] body) -> {
log.info("A message was returned by the broker");
});
rabbitMq.basicPublish(exchangeName, "pictures.profile", true /* mandatory */, MessageProperties.PERSISTENT_TEXT_PLAIN, messageBytes);
This will, as expected, print A message was returned by the broker if a message is sent with a routing key no consumer is bound to.
Now, I also want to know when the message was correctly received by a consumer. So I tried registering a ConfirmListener as well:
rabbitMq.addConfirmListener(new ConfirmListener() {
    @Override
    public void handleAck(long deliveryTag, boolean multiple) throws IOException {
        log.info("ACK message {}, multiple = {}", deliveryTag, multiple);
    }

    @Override
    public void handleNack(long deliveryTag, boolean multiple) throws IOException {
        log.info("NACK message {}, multiple = {}", deliveryTag, multiple);
    }
});
The issue here is that the ACK is sent by the broker, not by the consumer itself. So when the producer sends a message with a routing key K:
If a consumer is bound to this routing key, the broker just sends an ACK
Otherwise, the broker sends a basic.return followed by an ACK
Cf the docs:
For unroutable messages, the broker will issue a confirm once the exchange verifies a message won't route to any queue (returns an empty list of queues). If the message is also published as mandatory, the basic.return is sent to the client before basic.ack. The same is true for negative acknowledgements (basic.nack).
So while my problem is theoretically solvable using this, it would make the logic of knowing whether a message was correctly consumed very complicated (especially in the context of multithreading, persistence in a database, etc.):
send a message
on receive ACK:
    if no basic.return was received for this message:
        the message was correctly consumed
    else:
        the message wasn't correctly consumed
on receive basic.return:
    the message wasn't correctly consumed
Possible other solutions
Have a queue for each file category, i.e. the queues pictures_profile, pictures_gallery, etc. Not good since it removes a lot of flexibility for the consumers
Have a "response timeout" logic in the producer. The producer sends a message. It expects an "answer" for this message in the processing_results queue. A solution would be to resend the message if it hasn't been answered to after X seconds. I don't like it though, it would create some additional tricky logic in the producer.
Produce the messages with a TTL of 0, and have the producer listen on a dead-letter exchange. This is the official suggested solution to replace the 'immediate' flag removed in RabbitMQ 3.0 (see paragraph Removal of "immediate" flag). According to the docs of the dead letter exchanges, a dead letter exchange can only be configured per-queue. So it wouldn't work here
[edit] A last solution I see is to have every consumer create a durable queue that isn't destroyed when it disconnects, and have it listen on that queue. Example: consumer1 creates queue-consumer-1, which is bound to the messages of myExchange having the routing key abcd. The issue I foresee is that it implies finding a unique identifier for every consumer application instance (e.g. the hostname of the machine it runs on).
I would love to have some inputs on that - thanks!
Related to:
RabbitMQ: persistent message with Topic exchange (not applicable here since queues are created "on the fly")
Make sure the broker holds messages until at least one consumer gets it
RabbitMQ Topic Exchange with persisted queue
[edit] Solution
I ended up implementing something that uses a basic.return, as mentioned earlier. It is actually not so tricky to implement; you just have to make sure that the method producing the messages and the method handling the basic returns are synchronized (or share a lock if they are not in the same class), otherwise you can end up with interleaved execution flows that will mess up your business logic.
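A minimal sketch of that idea, assuming a recent Java client (the two-lambda addConfirmListener overload) and using the message body as the correlation key, which is only safe if bodies are unique; returnedBodies, outstanding and publishLock are illustrative names. It relies on the broker sending basic.return before the basic.ack of the same message:
import com.rabbitmq.client.MessageProperties;
import java.nio.charset.StandardCharsets;
import java.util.NavigableMap;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

Set<String> returnedBodies = ConcurrentHashMap.newKeySet();
NavigableMap<Long, String> outstanding = new ConcurrentSkipListMap<>();
Object publishLock = new Object();

channel.confirmSelect();

channel.addReturnListener((replyCode, replyText, exchange, routingKey, properties, body) -> {
    synchronized (publishLock) {
        returnedBodies.add(new String(body, StandardCharsets.UTF_8));
    }
});

channel.addConfirmListener(
    (deliveryTag, multiple) -> {
        synchronized (publishLock) {
            NavigableMap<Long, String> confirmed = multiple
                    ? outstanding.headMap(deliveryTag, true)
                    : outstanding.subMap(deliveryTag, true, deliveryTag, true);
            for (String body : confirmed.values()) {
                if (returnedBodies.remove(body)) {
                    // acked but previously returned: unroutable, schedule a resend
                } else {
                    // routed to at least one queue
                }
            }
            confirmed.clear();
        }
    },
    (deliveryTag, multiple) -> {
        // nacked: the broker could not take responsibility for the message, resend it
    });

synchronized (publishLock) {
    outstanding.put(channel.getNextPublishSeqNo(),
            new String(messageBytes, StandardCharsets.UTF_8));
    channel.basicPublish(exchangeName, "pictures.profile", true /* mandatory */,
            MessageProperties.PERSISTENT_TEXT_PLAIN, messageBytes);
}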
I believe that an alternate exchange would be the best fit for your use case as far as identifying messages that could not be routed is concerned.
Whenever an exchange with a configured AE cannot route a message to any queue, it publishes the message to the specified AE instead.
Basically, upon creation of the "main" exchange, you configure an alternate exchange for it.
For the referenced alternate exchange, I tend to go with a fanout, then create a queue (notroutedq) bound to it.
This means any message that is not published to at least one of the queues bound to your "main" exchange will end up in the notroutedq.
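A minimal sketch of that setup with the Java client (the names my-ae and notroutedq are illustrative):
import java.util.HashMap;
import java.util.Map;

// declare the fanout alternate exchange and the queue that collects unroutable messages
channel.exchangeDeclare("my-ae", "fanout", true);
channel.queueDeclare("notroutedq", true, false, false, null);
channel.queueBind("notroutedq", "my-ae", "");

// point the "main" topic exchange at the alternate exchange
Map<String, Object> exchangeArgs = new HashMap<>();
exchangeArgs.put("alternate-exchange", "my-ae");
channel.exchangeDeclare(exchangeName, "topic", true, false, exchangeArgs);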
Now regarding your statement:
because even if the queue created by the consumers is durable, it is destroyed as soon as the consumer disconnects since it is unique to this consumer.
Seems that you have configured your queues with auto-delete set to true.
If so, in case of a disconnect, as you stated, the queue is destroyed and the messages still present on it are lost; this case is not covered by the alternate exchange configuration.
It's not clear from your use case description whether you'd expect, in some cases, a message to end up in more than one queue; it seemed more like a case of one queue per type of processing expected (while keeping the grouping flexible). If the queue split is indeed related to the type of processing, I do not see the benefit of setting the queue to auto-delete, except maybe not having to do any cleanup maintenance when you want to change the bindings.
Assuming you can go with durable queues, then a dead letter exchange (I would again go with a fanout) with a binding to a dlq would cover the missing cases (see the sketch after the list below):
not routed: covered by the alternate exchange
correct processing: already handled by your processing_results queue
problematic processing, or processing that takes too long: covered by the dead letter exchange, in which case the additional headers added upon dead-lettering the message can even help identify the type of action to take
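A minimal sketch of the dead-letter part (dlx, dlq and the worker queue name are illustrative): a durable consumer queue whose rejected or expired messages are re-published to a fanout dead letter exchange and collected in dlq.
import java.util.HashMap;
import java.util.Map;

channel.exchangeDeclare("dlx", "fanout", true);
channel.queueDeclare("dlq", true, false, false, null);
channel.queueBind("dlq", "dlx", "");

Map<String, Object> queueArgs = new HashMap<>();
queueArgs.put("x-dead-letter-exchange", "dlx");
channel.queueDeclare("pictures-worker-queue", true, false, false, queueArgs);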
I'm debugging some Java code that uses Apache POI to pull data out of Microsoft Office documents. Occasionally, it encounters a large document and POI crashes when it runs out of memory. At that point, it tries to publish the error to RabbitMQ, so that other components can know that this step failed and take the appropriate actions. However, when it tries to publish to the queue, it gets a com.rabbitmq.client.AlreadyClosedException (clean connection shutdown; reason: Attempt to use closed channel).
Here's the error handler code:
try {
    // Extraction and indexing code
}
catch (Throwable t) {
    // Something went wrong! We'll publish the error and then move on with
    // our lives
    System.out.println("Error received when indexing message: ");
    t.printStackTrace();
    System.out.println();
    String error = PrintExc.format(t);
    message.put("error", error);
    if (mime == null) {
        mime = "application/vnd.unknown";
    }
    message.put("mime", mime);
    publish("IndexFailure", "", MessageProperties.PERSISTENT_BASIC, message);
}
For completeness, here's the publish method:
private void publish(String exch, String route,
                     AMQP.BasicProperties props, Map<String, Object> message) throws Exception {
    chan.basicPublish(exch, route, props,
            JSONValue.toJSONString(message).getBytes());
}
I can't find any code within the try block that appears to close the RabbitMQ channel. Are there any circumstances in which the channel could be closed implicitly?
EDIT: I should note that the AlreadyClosedException is thrown by the basicPublish call inside publish.
An AMQP channel is closed on a channel error. Two common things that can cause a channel error:
Trying to publish a message to an exchange that doesn't exist
Trying to publish a message with the immediate flag set when there is no queue with an active consumer that can receive it
I would look into setting up a ShutdownListener on the channel you're trying to use to publish a message, using addShutdownListener(), to catch the shutdown event and look at what caused it.
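For example, a minimal sketch (chan being the channel the publish() method above uses):
chan.addShutdownListener(cause ->
        System.out.println("Channel shut down: " + cause.getReason()
                + ", initiated by application: " + cause.isInitiatedByApplication()));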
Another reason in my case was that I acknowledged a message twice by mistake. This led to RabbitMQ errors in the log like this after the second acknowledgment:
=ERROR REPORT==== 11-Dec-2012::09:48:29 ===
connection <0.6792.0>, channel 1 - error:
{amqp_error,precondition_failed,"unknown delivery tag 1",'basic.ack'}
After I removed the duplicate acknowledgement, the errors went away, the channel did not close anymore, and the AlreadyClosedException was gone as well.
I'd like to add this information for other users searching for this topic.
Another possible reason for receiving a channel closed exception is when publishers and consumers access the channel/queue with different queue declarations/settings.
Publisher
channel.queueDeclare("task_queue", durable, false, false, null);
Worker
channel.queueDeclare("task_queue", false, false, false, null);
From RabbitMQ Site
RabbitMQ doesn't allow you to redefine an existing queue with different parameters and will return an error to any program that tries to do that
Apparently, there are many reasons for the AMQP connection and/or channels to close abruptly. In my case, there were too many unacknowledged messages on the queue because the consumer didn't specify the prefetch_count, so the connection was getting terminated every ~1 min. Limiting the number of unacknowledged messages by setting the consumer's prefetch count to a non-zero value fixed the problem:
channel.basicQos(100);
For those who wonder why their consuming channels are closing: check whether you ack or nack a delivery more than once.
In the rabbitmq log you would see messages like:
operation basic.ack caused a channel exception precondition_failed:
unknown delivery tag ...
I also had this problem. The reason in my case was that I first built the queue with durable = false, and when I switched durable to true I got this error message in the log file:
"inequivalent arg 'durable' for queue 'logsQueue' in vhost '/':
received 'true' but current is 'false'"
Then I changed the name of the queue and it worked for me. I assumed that the RabbitMQ server keeps a record of the built queues somewhere and cannot change their status from durable to non-durable and vice versa.
Again I made durable=false for the new queue, and this time I got this error:
"inequivalent arg 'durable' for queue 'logsQueue1' in vhost '/':
received 'false' but current is 'true'"
My assumption was true. When I listed the queues on the RabbitMQ server with:
rabbitmqctl list_queues
I saw both queues in the server.
To summarize, the 2 solutions are:
1. renaming the queue, which is not a good solution
2. resetting RabbitMQ with:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
What would be a good way to temporarily disable a message listener? The problem I want to solve is:
A JMS message is received by a message listener
I get an error when trying to process the message.
I wait for my system to get ready again to be able to process the message.
Until my system is ready, I don't want any more messages, so...
...I want to disable the message listener.
My system is ready for processing again.
The failed message gets processed, and the JMS message gets acknowledged.
Enable the message listener again.
Right now, I'm using Sun App Server. I disabled the message listener by setting it to null on the MessageConsumer, and enabled it again using setMessageListener(myOldMessageListener), but after this I don't get any more messages.
How about if you don't return from the onMessage() listener method until your system is ready to process messages again? That'll prevent JMS from delivering another message on that consumer.
That's the async equivalent of not calling receive() in a synchronous case.
There's no multi-threading for a given JMS session, so the pipeline of messages is held up until the onMessage() method returns.
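A minimal sketch of that approach (isSystemReady() and process() stand for your own code, they are not JMS APIs):
import javax.jms.Message;
import javax.jms.MessageListener;

MessageListener listener = new MessageListener() {
    @Override
    public void onMessage(Message message) {
        while (!isSystemReady()) {
            try {
                Thread.sleep(5_000);   // back off, then re-check
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        // the session won't deliver the next message until this method returns
        process(message);
    }
};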
I'm not familiar with the implications of dynamically calling setMessageListener(). The javadoc says there's undefined behavior if called "when messages are being consumed by an existing listener or sync consumer". If you're calling from within onMessage(), it sounds like you're hitting that undefined case.
There are start/stop methods at the Connection level, if that's not too coarse-grained for you.
Problem solved by a workaround replacing the message listener with a receive() loop, but I'm still interested in how to disable a message listener and enable it again shortly afterwards.
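For reference, a minimal sketch of such a receive() loop (isSystemReady(), process() and the running flag stand for your own code):
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;

void consumeLoop(MessageConsumer consumer) throws JMSException, InterruptedException {
    while (running) {
        if (!isSystemReady()) {
            Thread.sleep(5_000);                 // don't pull messages while the system is down
            continue;
        }
        Message msg = consumer.receive(1_000);   // returns null if nothing arrives within 1s
        if (msg != null) {
            process(msg);
            msg.acknowledge();                   // only relevant with CLIENT_ACKNOWLEDGE sessions
        }
    }
}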
That looks to me like the messages are being delivered but nothing is happening with them because you have no listener attached. It's been a while since I've done anything with JMS but don't you want to have the message sent to the dead letter queue or something while you fix the system, and then move the messages back onto the original queue once you're ready for processing again?
On WebLogic you can set up max retries, an error queue to handle messages that exceed the max retry limit, and other parameters. I'm not certain off the top of my head, but you also might be able to specify a wait period. All this is available to you in the admin console. I'd look at the admin for the JMS provider you've got and see if it can do something similar.
In JBoss the following code will do the trick:
MBeanServer mbeanServer = MBeanServerLocator.locateJBoss();
ObjectName objName = new ObjectName("jboss.j2ee:ear=MessageGateway.ear,jar=MessageGateway-EJB.jar,name=MessageSenderMDB,service=EJB3");
JMSContainerInvokerMBean invoker = (JMSContainerInvokerMBean) MBeanProxy.get(JMSContainerInvokerMBean.class, objName, mbeanServer);
invoker.stop(); //Stop MDB
invoker.start(); //Start MDB
I think you can call
messageConsumer.setMessageListener(null);
inside your MessageListener implementation and schedule a reestablishment task (for example with a ScheduledExecutorService). This task should call
connection.stop();
messageConsumer.setMessageListener(YOUR_NEW_LISTENER);
connection.start();
and it will work. The start() and stop() methods restart the delivery structures (not the TCP connection).
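A minimal sketch of that pattern (the 30-second delay, the scheduler and myListener are illustrative):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.jms.JMSException;

// inside onMessage(): detach the listener so no further messages are delivered to it
messageConsumer.setMessageListener(null);

// later, from a separate thread, stop delivery, re-attach and resume
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.schedule(() -> {
    try {
        connection.stop();                               // safe here: not on the delivery thread
        messageConsumer.setMessageListener(myListener);
        connection.start();
    } catch (JMSException e) {
        e.printStackTrace();
    }
}, 30, TimeUnit.SECONDS);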
Read the Javadoc https://docs.oracle.com/javaee/7/api/javax/jms/Connection.html#stop--
Temporarily stops a connection's delivery of incoming messages. Delivery can be restarted using the connection's start method. When the connection is stopped, delivery to all the connection's message consumers is inhibited: synchronous receives block, and messages are not delivered to message listeners.
To temporarily stop a connection's delivery of incoming messages, you need to use the stop() method of the Connection interface: https://docs.oracle.com/javaee/7/api/javax/jms/Connection.html#stop--
Just don't call connection.stop() from the MessageListener, because according to the JMS spec you will get a deadlock or an exception. Instead, call connection.stop() from a different thread; you just need to coordinate the MessageListener and the thread that is going to suspend the connection with connection.stop().