Artemis is throwing Cannot delete queue on binding - it has consumers - java

We have multiple application servers in a cluster connecting to an instance of ActiveMQ Artemis 2.17.0. One of the applications will act as master and the remaining application servers will act as slave nodes.
The master will try to drop the queue and recreate it before producing or consuming messages, as part of the data processing cycle.
Below is the exception stack we are observing in the Artemis logs:
2022-01-03 19:14:55,603 WARN [org.apache.activemq.artemis.core.server] Errors occurred during the buffering operation : ActiveMQIllegalStateException[errorType=ILLEGAL_STATE message=AMQ229025: Cannot delete queue -1.36.1.level-1.queue on binding -1.36.1.level-1.queue - it has consumers = org.apache.activemq.artemis.core.postoffice.impl.LocalQueueBinding]
at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2338) [artemis-server-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2306) [artemis-server-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2297) [artemis-server-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2277) [artemis-server-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2270) [artemis-server-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.core.protocol.openwire.OpenWireConnection.removeDestination(OpenWireConnection.java:1062) [artemis-openwire-protocol-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.core.protocol.openwire.OpenWireConnection$CommandProcessor.processRemoveDestination(OpenWireConnection.java:1206) [artemis-openwire-protocol-2.17.0.jar:2.17.0]
at org.apache.activemq.command.DestinationInfo.visit(DestinationInfo.java:124) [activemq-client-5.16.0.jar:5.16.0]
at org.apache.activemq.artemis.core.protocol.openwire.OpenWireConnection.act(OpenWireConnection.java:323) [artemis-openwire-protocol-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.utils.actors.Actor.doTask(Actor.java:33) [artemis-commons-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65) [artemis-commons-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.17.0.jar:2.17.0]
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65) [artemis-commons-2.17.0.jar:2.17.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_292]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_292]
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.17.0.jar:2.17.0]
Could you tell me when this would happen and how to avoid it?
Please find below the sample code used to drop the queue:
public void dropQueue() throws JMSException {
    try {
        if (connection instanceof ActiveMQConnection && destination instanceof ActiveMQDestination) {
            ((ActiveMQConnection) connection).destroyDestination((ActiveMQDestination) destination);
        } else {
            log.info("Dropping queue : " + queueName + " not supported.");
        }
    } catch (JMSException e) {
        throw e;
    }
}

The broker is logging this WARN message (and not deleting the queue) because the queue you are trying to delete has consumers on it just as the WARN message indicates:
Cannot delete queue...it has consumers
You can avoid this either by not deleting the queue or by removing all the consumers from the queue before deleting it.
In general it's not a good idea for clients to delete queues on the broker since queues are shared resources and may be used by other clients.
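For the second option, a minimal sketch (not the poster's actual code; the consumer and session fields are assumptions) is to close this application's own consumer and session before attempting the drop, keeping in mind that consumers held by the other nodes in the cluster will still block the delete:
public void dropQueueSafely() throws JMSException {
    // Close our own consumer and session first so they no longer count against the binding.
    if (consumer != null) {
        consumer.close();
    }
    if (session != null) {
        session.close();
    }
    if (connection instanceof ActiveMQConnection && destination instanceof ActiveMQDestination) {
        // AMQ229025 can still be thrown here if consumers on other nodes remain attached.
        ((ActiveMQConnection) connection).destroyDestination((ActiveMQDestination) destination);
    }
}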

Related

How to trigger rabbit mq shutdown signal for testing?

I faced an issue on a prod system where one message was left unacked for 30 minutes, which led to the consumer being shut down. Now I have added shutdown listeners as described in the RabbitMQ docs:
https://rabbitmq.github.io/rabbitmq-java-client/api/4.x.x/com/rabbitmq/client/ShutdownListener.html
if (cause.isHardError()) {
    log.error("Connection error with cause : {}", cause);
    Connection conn = (Connection) cause.getReference();
    if (!cause.isInitiatedByApplication()) {
        Method reason = cause.getReason();
        log.error("Rabbit Mq Consumer Connection Shutdown : {} {}", reason, cause);
    }
} else {
    Channel ch = (Channel) cause.getReference();
    log.error("Channel error details : {}", ch);
}
});
The issue is that it's not getting invoked at all in testing. I tried triggering it in two ways:
Through the unacked delivery timeout: I threw a general exception and never acked the message (these were the original conditions of the bug). However, this didn't work.
I used channel.close() to shut down the consumer, but still didn't receive an event.
I'm looking for any way to replicate the issue I faced and test/trigger the shutdown listeners. Thanks
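For reference, here is a minimal, self-contained registration sketch (an assumption of how the truncated snippet above is wired up, using the plain RabbitMQ Java client). Note that channel.close() and connection.close() count as application-initiated closes, so the hard-error branch will not fire for them; a broker-side force close (for example from the management UI) or a dropped TCP connection arrives as a hard error with isInitiatedByApplication() == false:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ShutdownListenerDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker for the test
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        connection.addShutdownListener(cause -> {
            // Fires for broker-initiated and network-level closes as well as application closes.
            System.out.printf("hardError=%s, initiatedByApplication=%s, reason=%s%n",
                    cause.isHardError(), cause.isInitiatedByApplication(), cause.getReason());
        });

        // Keep the connection open; now force-close it from the broker side and
        // watch the listener report isInitiatedByApplication() == false.
        Thread.sleep(60_000);
        if (connection.isOpen()) {
            connection.close(); // this close is application-initiated, so the hard-error branch is skipped
        }
    }
}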

Apache Camel - Seda endpoint, recipientList, BlockWhenFull=true and Queue full IllegalStateException

I have code that retrieves messages from a RabbitMQ queue, aggregates them, and then distributes the aggregates to another route that dispatches them to different routes via the recipientList component.
The problem is that the latter throws the "Error executing reactive work due to Queue full" exception despite the addition of the blockWhenFull=true property on the producer side.
from("direct:rabbitmq-ids-aggregate")
.aggregate(constant(true), new UpdatesAggregationStrategy())
.completionInterval("{{updates.aggregation.completionInterval}}")
.completionSize("{{updates.aggregation.completionSize}}")
.setHeader(CORRELATION_HEADER, simple("${exchangeId}"))
.to("seda:dispatch?blockWhenFull=true");
from("seda:dispatch")
.recipientList(simple("{{routes.hr-data}},{{routes.comments-data}},{{routes.legacy-hr-data}},{{routes.ranking-data}}"))
.end();
2023-01-10 19:46:35,944 WARN o.a.c.i.e.DefaultReactiveExecutor [Camel (integration-core) thread #5 - seda://dispatch] Error executing reactive work due to Queue full. This exception is ignored.
java.lang.IllegalStateException: Queue full
at java.util.AbstractQueue.add(AbstractQueue.java:98) ~[?:?]
at org.apache.camel.component.seda.SedaProducer.addToQueue(SedaProducer.java:251) ~[camel-seda-3.15.0.jar:3.15.0]
at org.apache.camel.component.seda.SedaProducer.process(SedaProducer.java:149) ~[camel-seda-3.15.0.jar:3.15.0]
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:471) ~[camel-core-processor-3.15.0.jar:3.15.0]
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:187) [camel-base-engine-3.15.0.jar:3.15.0]
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:64) [camel-base-engine-3.15.0.jar:3.15.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:184) [camel-core-processor-3.15.0.jar:3.15.0]
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:399) [camel-base-engine-3.15.0.jar:3.15.0]
at org.apache.camel.component.seda.SedaConsumer.sendToConsumers(SedaConsumer.java:269) [camel-seda-3.15.0.jar:3.15.0]
at org.apache.camel.component.seda.SedaConsumer.doRun(SedaConsumer.java:187) [camel-seda-3.15.0.jar:3.15.0]
at org.apache.camel.component.seda.SedaConsumer.run(SedaConsumer.java:130) [camel-seda-3.15.0.jar:3.15.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
The blockWhenFull=true property on the producer side is not working because it is being set on the wrong endpoint (direct:rabbitmq-ids-aggregate) instead of seda:dispatch. To fix this issue, you should add the blockWhenFull=true property on the seda:dispatch endpoint like this:
from("seda:dispatch?blockWhenFull=true")
.recipientList(simple("{{routes.hr-data}},{{routes.comments-data}},
{{routes.legacy-hr-data}},{{routes.ranking-data}}")).end();
This will tell Camel to block and wait for the seda:dispatch queue to have space available before adding new messages to it, which should prevent the "Error executing reactive work due to Queue full" exception from being thrown.
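If the queue still fills up, another lever (a sketch assuming Camel 3.x and that you can configure the SEDA component programmatically) is to make blocking the default for every SEDA producer in the context, and optionally to enlarge the default queue:
import org.apache.camel.CamelContext;
import org.apache.camel.component.seda.SedaComponent;

public final class SedaTuning {
    public static void configure(CamelContext context) {
        SedaComponent seda = context.getComponent("seda", SedaComponent.class);
        seda.setDefaultBlockWhenFull(true); // block senders instead of throwing IllegalStateException: Queue full
        seda.setQueueSize(5000);            // illustrative value; the default capacity is 1000
    }
}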

io.smallrye.mutiny.TimeoutException when using kafka vs redis

I'm using kafka + redis in my project.
I get messages from Kafka, process them, and save the result to Redis, but after my code has been running for some time it starts throwing the error below:
io.smallrye.mutiny.TimeoutException
at io.smallrye.mutiny.operators.uni.UniBlockingAwait.await(UniBlockingAwait.java:64)
at io.smallrye.mutiny.groups.UniAwait.atMost(UniAwait.java:65)
at io.quarkus.redis.client.runtime.RedisClientImpl.await(RedisClientImpl.java:1046)
at io.quarkus.redis.client.runtime.RedisClientImpl.set(RedisClientImpl.java:687)
at worker.redis.process.implementation.ProductImplementation.refresh(ProductImplementation.java:34)
at worker.redis.Worker.refresh(Worker.java:51)
at kafka.InComingProductKafkaConsume.lambda$consume$0(InComingProductKafkaConsume.java:38)
at business.core.hpithead.ThreadStart.doRun(ThreadStart.java:34)
at business.core.hpithead.core.NotifyingThread.run(NotifyingThread.java:27)
at java.base/java.lang.Thread.run(Thread.java:833)
The record 51761 from topic-partition 'mer-outgoing-master-item-0' has waited for 153 seconds to be acknowledged. This waiting time is greater than the configured threshold (150000 ms). At the moment 2 messages from this partition are awaiting acknowledgement. The last committed offset for this partition was 51760. This error is due to a potential issue in the application which does not acknowledged the records in a timely fashion. The connector cannot commit as a record processing has not completed.
@Incoming("mer_product")
@Blocking
public CompletionStage<Void> consume2(Message<String> payload) {
    var objectDto = configThreadLocal.mapper.readValue(payload.getPayload(),
            new TypeReference<KafkaPayload<ItemKO>>() {});
    worker.refresh(objectDto.payload.castDto());
    return payload.ack();
}
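The timeout comes from the blocking RedisClient, which internally awaits the reactive call with a bounded timeout (see RedisClientImpl.await in the trace); while the consumer thread is parked there, the Kafka connector cannot get the record acknowledged within its threshold. One possible direction, sketched here with assumed class, field, and key names and using the non-blocking ReactiveRedisClient from the same Quarkus extension, is to chain the Redis write to the ack so the consumer thread is never blocked:
import java.util.List;
import java.util.concurrent.CompletionStage;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Message;

import io.quarkus.redis.client.reactive.ReactiveRedisClient;
import io.smallrye.mutiny.Uni;

@ApplicationScoped
public class IncomingProductConsumer {

    @Inject
    ReactiveRedisClient reactiveRedis;

    @Incoming("mer_product")
    public CompletionStage<Void> consume(Message<String> payload) {
        // SET <key> <value>; the key is a placeholder for this sketch.
        return reactiveRedis.set(List.of("product:latest", payload.getPayload()))
                .chain(response -> Uni.createFrom().completionStage(payload.ack()))
                .subscribeAsCompletionStage();
    }
}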

AMQP Channel shutdown but Consumer not always restart

I have frequent Channel shutdown: connection error issues (under the 24.133.241:5671 thread; the name is truncated) in the RabbitMQ Java client (my producer and consumer are far apart). Most of the time the consumer is automatically restarted, as I have enabled heartbeats (15 seconds). However, there were some instances with only Channel shutdown: connection error but no Consumer raised exception and no Restarting Consumer (under the cTaskExecutor-4 thread).
My current workaround is to restart my application. Can anyone shed some light on this matter?
2017-03-20 12:42:38.856 ERROR 24245 --- [24.133.241:5671] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error
2017-03-20 12:42:39.642 WARN 24245 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
...
2017-03-20 12:42:39.642 INFO 24245 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{amq.ctag-4CqrRsUP8plDpLQdNcOjDw=21-05060179}], channel=Cached Rabbit Channel: AMQChannel(amqp://21-05060179@10.24.133.241:5671/,1), conn: Proxy@7ec31754 Shared Rabbit Connection: SimpleConnection@44bac9ec [delegate=amqp://21-05060179@10.24.133.241:5671/], acknowledgeMode=NONE local queue size=0
Generally, this is due to the consumer thread being "stuck" in user code somewhere, so it can't react to the broken connection.
If you have network issues, perhaps it's stuck reading or writing to a socket; make sure you have timeouts set for any I/O operations.
Next time it happens take a thread dump to see what the consumer threads are doing.
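As an illustration of the timeout advice (a hypothetical outbound HTTP call made from inside the listener; the URL and class name are placeholders), explicit connect and read timeouts keep a dead remote peer from pinning the consumer thread indefinitely:
import java.net.HttpURLConnection;
import java.net.URL;

public class OutboundCallWithTimeouts {
    public static int call() throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL("https://example.org/api").openConnection();
        conn.setConnectTimeout(5_000); // fail fast if the TCP connection cannot be established
        conn.setReadTimeout(10_000);   // fail fast if the remote side stops responding mid-call
        return conn.getResponseCode();
    }
}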

Spring AMQP – Is a @RabbitListener Polling under the Hood?

Summary
I want to asynchronously handle messages from an AMQP/RabbitMQ queue. I have implemented a @RabbitListener method (from spring-rabbit) for this, but it seems that this listener is actually polling my queue under the hood. Is that to be expected? I would have expected the listener to somehow be notified by RabbitMQ instead of having to poll.
If it’s to be expected, can I somehow also consume messages asynchronously with Spring AMQP without polling?
What I Have Observed
When I send a message, it is correctly picked up by the listener, but I still see a continuous stream of log messages which indicate that the listener continues to poll the empty queue:
…
15:41:10.543 [pool-1-thread-3] DEBUG o.s.a.r.l.BlockingQueueConsumer - ConsumeOK : Consumer: tags=[{amq.ctag-bUsK4KQN6_QHzf8DoDC_ww=myQueue}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.1.1:5672/,1), acknowledgeMode=MANUAL local queue size=0
15:41:10.544 [main] DEBUG o.s.a.r.c.CachingConnectionFactory - Creating cached Rabbit Channel from AMQChannel(amqp://guest@127.0.1.1:5672/,2)
15:41:10.545 [main] DEBUG o.s.amqp.rabbit.core.RabbitTemplate - Executing callback on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.1.1:5672/,2)
15:41:10.545 [main] DEBUG o.s.amqp.rabbit.core.RabbitTemplate - Publishing message on exchange [], routingKey = [myQueue]
Sent: Hello World
15:41:10.559 [pool-1-thread-4] DEBUG o.s.a.r.l.BlockingQueueConsumer - Storing delivery for Consumer: tags=[{amq.ctag-bUsK4KQN6_QHzf8DoDC_ww=myQueue}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.1.1:5672/,1), acknowledgeMode=MANUAL local queue size=0
15:41:10.560 [SimpleAsyncTaskExecutor-1] DEBUG o.s.a.r.l.BlockingQueueConsumer - Received message: (Body:'Hello World'MessageProperties [headers={}, timestamp=null, messageId=null, userId=null, appId=null, clusterId=null, type=null, correlationId=null, replyTo=null, contentType=text/plain, contentEncoding=UTF-8, contentLength=0, deliveryMode=PERSISTENT, expiration=null, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=myQueue, deliveryTag=1, messageCount=0])
15:41:10.571 [SimpleAsyncTaskExecutor-1] DEBUG o.s.a.r.l.a.MessagingMessageListenerAdapter - Processing [GenericMessage [payload=Hello World, headers={timestamp=1435844470571, id=018f39f6-ebca-aabf-7fe3-a095e959f65d, amqp_receivedRoutingKey=myQueue, amqp_deliveryMode=PERSISTENT, amqp_consumerQueue=myQueue, amqp_consumerTag=amq.ctag-bUsK4KQN6_QHzf8DoDC_ww, amqp_contentEncoding=UTF-8, contentType=text/plain, amqp_deliveryTag=1, amqp_redelivered=false}]]
Received: Hello World
15:41:10.579 [SimpleAsyncTaskExecutor-1] DEBUG o.s.a.r.l.BlockingQueueConsumer - Retrieving delivery for Consumer: tags=[{amq.ctag-bUsK4KQN6_QHzf8DoDC_ww=myQueue}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.1.1:5672/,1), acknowledgeMode=MANUAL local queue size=0
15:41:11.579 [SimpleAsyncTaskExecutor-1] DEBUG o.s.a.r.l.BlockingQueueConsumer - Retrieving delivery for Consumer: tags=[{amq.ctag-bUsK4KQN6_QHzf8DoDC_ww=myQueue}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.1.1:5672/,1), acknowledgeMode=MANUAL local queue size=0
15:41:12.583 [SimpleAsyncTaskExecutor-1] DEBUG o.s.a.r.l.BlockingQueueConsumer - Retrieving delivery for Consumer: tags=[{amq.ctag-bUsK4KQN6_QHzf8DoDC_ww=myQueue}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.1.1:5672/,1), acknowledgeMode=MANUAL local queue size=0
…
The last log message basically repeats infinitely every second.
My Test Code
The first two methods are probably the most interesting part; the rest is mainly Spring configuration:
@Configuration
@EnableRabbit
public class MyTest {

    public static void main(String[] args) throws InterruptedException {
        try (ConfigurableApplicationContext appCtxt =
                new AnnotationConfigApplicationContext(MyTest.class)) {

            // send a test message
            RabbitTemplate template = appCtxt.getBean(RabbitTemplate.class);
            Queue queue = appCtxt.getBean(Queue.class);
            template.convertAndSend(queue.getName(), "Hello World");
            System.out.println("Sent: Hello World");

            // Now that the application with its message listeners is running,
            // block this thread forever; make sure, though, that the
            // application context can sanely be closed.
            appCtxt.registerShutdownHook();
            Object blockingObj = new Object();
            synchronized (blockingObj) {
                blockingObj.wait();
            }
        }
    }

    @RabbitListener(queues = "#{ @myQueue }")
    private void processHello(@Payload String msg,
            @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag, Channel channel)
            throws IOException {
        System.out.println("Received: " + msg);
        channel.basicAck(deliveryTag, false);
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        return new RabbitTemplate(rabbitConnFactory());
    }

    @Bean
    public ConnectionFactory rabbitConnFactory() {
        return new CachingConnectionFactory();
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory result =
                new SimpleRabbitListenerContainerFactory();
        result.setConnectionFactory(rabbitConnFactory());
        result.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        return result;
    }

    @Bean
    public Queue myQueue() {
        return new Queue("myQueue", false);
    }

    @Bean
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(rabbitConnFactory());
    }
}
It's not polling RabbitMQ; when a message arrives asynchronously from the broker, it is placed in an internal queue in the consumer, which hands it over to the listener thread that is blocked waiting for it to arrive.
The DEBUG message you are seeing is logged after the listener thread times out while waiting for a new message to arrive from RabbitMQ.
You can increase the receiveTimeout to reduce the logs, or simply disable DEBUG logging for the BlockingQueueConsumer.
Increasing the timeout will make the container less responsive to container stop() requests.
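A sketch of the question's container factory with a longer receive timeout (the 60-second value is illustrative, and this assumes your Spring AMQP version exposes setReceiveTimeout on the factory; otherwise it can be set on the listener container itself):
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
    SimpleRabbitListenerContainerFactory result = new SimpleRabbitListenerContainerFactory();
    result.setConnectionFactory(rabbitConnFactory());
    result.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    result.setReceiveTimeout(60_000L); // default is 1000 ms; fewer timeout loops, but slower stop()
    return result;
}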
EDIT:
In response to your comment below...
Yes, we could interrupt the thread but it's a bit more involved than that. The receive timeout is also used to ack messages when txSize is > 1.
Let's say you only want to ack every 20 messages (instead of every message); people do that to improve performance in high-volume environments. The timeout is also used for the ack (the ack is actually sent every n messages or on timeout).
Now, let's say 19 messages arrive and then none for 60 seconds, and your timeout is 30 seconds.
That would leave the 19 messages un-acked for a long time, whereas with the default configuration the ack is sent 1 second after the 19th message arrives.
There really is little overhead in this timeout (we simply loop back and wait again), so it is unusual for it to be increased.
Also, while the container is stopped when the context is closed, people stop and start containers all the time.
