Is it possible to roll back an asynchronously processed message in ActiveMQ? I'm consuming the next message while the first one is still processing, so when I try to roll back the first message on another (non-ActiveMQ-pool) thread, I get the above error. Should I eventually send the message to the DLQ manually?
Message error handling can work a couple ways:
Broker-side 'redelivery policy', where the client rolls the message back n times (the default is usually 6 retries) and the broker then automatically moves it to a Dead Letter Queue (DLQ)
Client-side. The application consumes the message and then produces it to the DLQ.
Option #1 is good for unplanned/planned outages (database down, etc.) where you want automatic retry. The redelivery policy can also be configured when the client connects to the broker, as sketched below.
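For reference, a minimal sketch of configuring the redelivery policy from the client side, assuming the ActiveMQ 5.x OpenWire Java client and a hypothetical broker URL (the same values can also be passed as jms.redeliveryPolicy.* options on the connection URI):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");
RedeliveryPolicy policy = factory.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay(5000);  // wait 5s before the first redelivery
policy.setRedeliveryDelay(10000);        // 10s between subsequent redeliveries
policy.setMaximumRedeliveries(6);        // after 6 failed attempts the message goes to the DLQ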
Option #2 is good for 'bad data' scenarios where you know the message will never be able to be processed. This is ideal, because you can move the message on the first consumption instead of rejecting it n times.
When you combine infinite retry in #1 with #2 in your application flow, you get a robust process flow: automatic retry, and bad data moved out of the way quickly. Best of breed =)
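To make that concrete, here is a minimal sketch of the combined flow, assuming a transacted JMS session; the queue names and the process()/BadDataException pieces are hypothetical stand-ins for your own logic:

import javax.jms.*;

Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageConsumer consumer = session.createConsumer(session.createQueue("APP.IN"));
MessageProducer dlqProducer = session.createProducer(session.createQueue("APP.IN.DLQ"));

Message message = consumer.receive();
try {
    process(message);              // your business logic (hypothetical)
    session.commit();
} catch (BadDataException e) {     // option #2: the message can never succeed
    dlqProducer.send(message);     // move it out of the way on first consumption
    session.commit();
} catch (Exception e) {            // option #1: transient failure (database down, etc.)
    session.rollback();            // let the broker's redelivery policy retry
}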
ActiveMQ Redelivery policy
Our system is configured to consume from and send the reply to the same queue, i.e., JMSDestination and JMSReplyTo are the same. I cannot change that right now.
In my integration test, if I set replyToSameDestinationAllowed=true, Camel keeps consuming the reply I sent to the queue, i.e., it "captures" the source, never stops, and enters a loop.
But, if I don't set it, Camel refuses to send the reply to the queue, saying this:
JMSDestination and JMSReplyTo is the same, will skip sending a reply message to itself
That causes a problem for my integration test. I want to consume the message in a separate method and assert against it.
How can I stop Camel from capturing this queue, i.e., consume only once and ignore the rest?
At the end of my route I call stop() to send the reply automatically.
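For context, a minimal sketch of the route shape described, with hypothetical endpoint and method names; when routing ends, Camel's JMS consumer sends the reply for InOut exchanges:

from("activemq:my.queue")
    .process(exchange -> handle(exchange)) // business logic (hypothetical)
    .stop(); // routing ends here and the reply is sent automatically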
When receiving the second message (the reply), I see this line:
2023-01-10 14:37:22,186 DEBUG [org.apa.cam.com.jms.EndpointMessageListener]-{Camel (camel-1) thread #19 - JmsConsumer[my.queue]}-Received Message has JMSCorrelationID [ID:hostname-1673354133272-4:1:1:10:1]
Can I use this to ignore the reply? Should I stop the route? Roll back? Or what should I do?
In the end, I filtered out messages based on the presence of the JMSCorrelationID header.
from("activemq:xxx")
.filter(simple("${header.JMSCorrelationID} == null")) // ignore reply
.to("direct:main");
Even though I don't set it in my client-side code, it seems that Camel will use the message ID to set JMSCorrelationID when sending the reply if the incoming message doesn't have one. If the incoming message already has a JMSCorrelationID, Camel will not change it and will copy that value to the reply. (I guess that if you manually set JMSCorrelationID on the client side, Camel will stop setting it for you.)
So basically, a message without JMSCorrelationID is a new message that hasn't passed through my client application. I think only the client side should set it, especially in my case where the original message and the replies are put into the same queue and the client needs a means to filter out replies.
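A sketch of what that client-side setting could look like when sending the original message, assuming plain JMS; the session, producer, and queue objects are hypothetical:

TextMessage request = jmsSession.createTextMessage("payload");
request.setJMSCorrelationID(java.util.UUID.randomUUID().toString()); // client-chosen ID
request.setJMSReplyTo(myQueue); // same queue as the destination in this setup
producer.send(request);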
Also, I found that a receiver can specify a message selector naming the field you want to filter on. For example:
QueueReceiver receiver = jmsSession.createReceiver(myQueue, "JMSCorrelationID='" + correlationId + "'");
This is useful when you know the correlationId. But in my case (@QuarkusIntegrationTest, which is a black-box test), this cannot be used.
But after doing that, in my integration test Camel still "captures" the consumption and will not let another method consume the message properly (the other method never receives anything) when I run the whole test class (with other test cases); when run individually, this test case passes. So I ended up disabling the test case.
It seems that after filtering out the message, Camel behaves exactly as if I had called .stop(): it executes the callback (sending the reply) and sends the original message to the reply queue, in my case the original queue, so it loops and never lets go. Even if I enable the duplicate check, it still captures.
Ultimately, we separated the queues, so even if capturing happens, it no longer matters.
We have a Spring Boot service implementation in which we are using delayed messaging with the below setup:
The initial queue (Queue 1) that gets the message has a TTL set; the queue also has a dead letter exchange configured, with a specific dead letter routing key.
Another queue (Queue 2) is bound to the DLX of the previous queue using the routing key that was set as the dead letter routing key.
A consumer listens to the messages on Queue 2. (This setup is sketched in code below.)
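A minimal sketch of that setup, assuming Spring AMQP; the queue, exchange, and routing-key names are hypothetical:

import org.springframework.amqp.core.*;

// Queue 1: messages expire after the TTL and are dead-lettered to the DLX
Queue queue1 = QueueBuilder.durable("queue1")
        .withArgument("x-message-ttl", 30000)
        .withArgument("x-dead-letter-exchange", "dlx")
        .withArgument("x-dead-letter-routing-key", "delayed")
        .build();

// Queue 2: bound to the DLX with the dead letter routing key
Queue queue2 = QueueBuilder.durable("queue2").build();
Binding binding = BindingBuilder.bind(queue2)
        .to(new DirectExchange("dlx"))
        .with("delayed");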
The delayed messaging seems to work as expected, but I am seeing an issue with messages getting redelivered in certain scenarios.
If I put a debug point in my consumer and hold the message for some time just after reading it, then once the current message has been processed, the consumer gets another message with the below properties:
Redelivered property as true.
Property deliveryAttempt as 1
Only the first message has an x-death header and redelivered messages do not seem to have it.
The message is delivered 3 times, as many times as I pause the consumer at the debug point after reading each redelivered message.
My understanding was that the acknowledgment mode is AUTO by default, so once the consumer has read the message it should not be redelivered?
I have tried using the maxAttempts=1 property, but it does not seem to help.
I am using Spring Cloud Stream to create the consumers and the queues.
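For reference, a minimal sketch of the consumer, assuming Spring Cloud Stream 3.x functional bindings; the bean name is hypothetical:

import java.util.function.Consumer;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.Message;

@Bean
public Consumer<Message<String>> handleDelayed() {
    return message -> {
        // headers such as x-death and deliveryAttempt can be inspected here
        System.out.println("Received: " + message.getPayload());
    };
}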
I used to run into this issue when message processing in the consumer failed (an exception was thrown). In that case, if you have a DLQ configured, make sure to add the following configuration as well so the failed message is routed to the DLQ rather than back to the original listening queue.
"
rabbit:
autoBindDlq: true
"
Otherwise, if you don't set up a DLQ, set autoBindDlq to false.
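For context, the full property path for that setting, assuming the Spring Cloud Stream Rabbit binder and a binding named "input" (adjust to your own binding name):

spring:
  cloud:
    stream:
      rabbit:
        bindings:
          input:
            consumer:
              autoBindDlq: true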
I have configured the ActiveMQ redelivery plugin as follows (with max 4 redeliveries):
<redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
  <redeliveryPolicyMap>
    <redeliveryPolicyMap>
      <defaultEntry>
        <redeliveryPolicy initialRedeliveryDelay="5000" maximumRedeliveries="4" redeliveryDelay="10000"/>
      </defaultEntry>
    </redeliveryPolicyMap>
  </redeliveryPolicyMap>
</redeliveryPlugin>
If a client fails to send an ACK, the message is redelivered. So far so good... However, the maximum redeliveries setting is completely ignored by the broker and it keeps redelivering the message "infinitely" many times. Also, the message is never moved to the DLQ.
I also tried using:
?jms.redeliveryPolicy.maximumRedeliveries=4 on the connection URI (STOMP connector), but with no luck there either.
Any help is most appreciated!
For a STOMP client, I would assume that the broker does not consider the message delivered unless the client either ACKs or NACKs it; otherwise it must assume the message never made it to a client and treat it as always having a delivery count of zero. The broker redelivery plugin keys off the message's delivery count, so if the message is treated as never having been delivered, which in this case it likely is, the plugin will take no action on it.
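To illustrate, a sketch of the STOMP 1.2 frames involved, with hypothetical destination and ids (^@ marks the NUL frame terminator); subscribing with an explicit ack mode and NACKing failures gives the broker a delivery outcome to count:

SUBSCRIBE
id:sub-0
destination:/queue/my.queue
ack:client-individual

^@

Each MESSAGE frame then carries an "ack" header, which the client answers with either an ACK or a NACK frame:

NACK
id:<value of the message's ack header>

^@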
I have set up an errorHandler in a Camel route that will retry a message several times before sending it to a dead letter channel (an ActiveMQ queue in this case). What I would like is an ERROR log when the message has failed the maximum number of retries and is then sent to the dead letter queue.
Looking at the docs for error handling and dead letter channels, there appear to be two options available on the RedeliveryPolicy: retryAttemptedLogLevel and retriesExhaustedLogLevel. Supposedly retriesExhaustedLogLevel already defaults to LoggingLevel.ERROR, but it does not appear to actually log anything when all retries are exhausted and the message is routed to the dead letter channel.
Here is my errorHandler definition via Java DSL.
.errorHandler(this.deadLetterChannel(MY_ACTIVE_MQ_DEAD_LETTER)
.useOriginalMessage()
.maximumRedeliveries(3)
.useExponentialBackOff()
.retriesExhaustedLogLevel(LoggingLevel.ERROR)
.retryAttemptedLogLevel(LoggingLevel.WARN))
I have explicitly set the level to ERROR now and it still does not appear to log anything (at any logging level). On the other hand, retryAttemptedLogLevel works just fine and logs at the configured LoggingLevel (i.e., I can set retryAttemptedLogLevel to LoggingLevel.ERROR and see the retries as ERROR logs). However, I only want a single ERROR log in the event of exhaustion, instead of an ERROR log for each retry when a subsequent retry could still succeed.
Maybe I am missing something, but it seems that retriesExhaustedLogLevel does not do anything, or at least does not log anything when the ErrorHandler is configured as a DeadLetterChannel. Is there some configuration I am still missing, or does this feature of the RedeliveryPolicy not apply to this specific ErrorHandlerFactory?
I could also set up a route that simply logs my exhausted messages and routes them to my dead letter channel, but I would prefer to use what is already built into the ErrorHandler if possible.
Updated the ErrorHandler's DeadLetterChannel to be a direct endpoint and left the two logLevel configurations the same. I got the 3 retry-attempted WARN logs, but no ERROR log telling me the retries were exhausted. I did, however, set up a small route listening on the direct dead letter endpoint that logs, and that works.
Not a direct solution to my desire to have the ERROR log on exhaustion, but it is an acceptable workaround for now.
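For reference, a minimal sketch of that workaround, assuming the Camel Java DSL inside a RouteBuilder; the direct endpoint name is hypothetical and MY_ACTIVE_MQ_DEAD_LETTER is the original destination:

errorHandler(deadLetterChannel("direct:deadLetter")
    .useOriginalMessage()
    .maximumRedeliveries(3)
    .useExponentialBackOff()
    .retryAttemptedLogLevel(LoggingLevel.WARN));

from("direct:deadLetter")
    .log(LoggingLevel.ERROR, "Retries exhausted: ${exception.message}")
    .to(MY_ACTIVE_MQ_DEAD_LETTER);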
Please try with this code:
.errorHandler(deadLetterChannel("kafka:sample-dead-topic")
    .maximumRedeliveries(4)
    .redeliveryDelay(60000)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .retriesExhaustedLogLevel(LoggingLevel.ERROR)
    .logHandled(true)
    .allowRedeliveryWhileStopping(true)
    .logRetryStackTrace(true)
    .logExhausted(true)
    .logStackTrace(true)
    .logExhaustedMessageBody(true)
)
Retry is configured with a 1-minute interval. The Camel application logged the errors for every retry with detailed information.
We are using a standalone MQ JMS client application (no app server) to consume WebSphere MQ messages. Our queue definitions are as follows:
APP_QUEUE1 - (QA, PUT enabled)
APP_QUEUE1.CL - (QL and target of above APP_QUEUE1)
APP_QUEUE1_BOQ - (QA and BOQNAME of APP_QUEUE1.CL, PUT enabled)
APP_QUEUE1_BOQ.CL - (QL and target of above APP_QUEUE1_BOQ )
BOTHRESH of APP_QUEUE1 = 3.
With the above setup, when an exception occurs for the first time, I get an exception saying the backout queue is not defined, and the attempt to add the message to the dead letter queue also fails. Can someone explain why the message is not getting requeued to the main queue (APP_QUEUE1) even though BOTHRESH is 3?
My understanding is that, in case of an exception, the message will be requeued to APP_QUEUE1 up to 3 times and after that routed to the backout queue. Only if the backout queue is full or the put fails is the message added to the dead letter queue.
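To illustrate that expected flow on the client side, a sketch assuming IBM MQ classes for JMS with a transacted session; process() and the producer/consumer objects are hypothetical:

Message msg = consumer.receive();
int deliveries = msg.getIntProperty("JMSXDeliveryCount"); // standard JMS-defined property
if (deliveries > 3) {            // BOTHRESH exceeded
    backoutProducer.send(msg);   // requeue to APP_QUEUE1_BOQ
    session.commit();
} else {
    try {
        process(msg);
        session.commit();
    } catch (Exception e) {
        session.rollback();      // message goes back to APP_QUEUE1, delivery count incremented
    }
}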
Can someone please tell me whether there is anything wrong with the queue definitions, or whether something needs to be done in the application code?