I am looking for a simple delayed retried delivery on my camel route. I have configured
from("rss:" + rssUrl + "?splitEntries=false&delay=15s").bean(rssHandler) .onException(ConnectException.class).redeliveryDelay(10000).backOffMultiplier(2).maximumRedeliveries(5);
on my route, but after seeing that it did not work, I learned that I must configure a dead letter channel; otherwise this configuration is basically ignored.
So I added:
errorHandler(deadLetterChannel("log:error"));
to my Java Camel configuration. However, I am looking for the simplest possible dead letter channel implementation that does not require me to pull in, say, ActiveMQ or anything like that; I'd be happy with a simple in-memory retry mechanism with no guarantees. Unfortunately, I haven't found one so far, so I'm reaching out here in case anyone can point me to a simple way to configure retries with some minimal dead letter channel component.
Using a seda component instead of a log component can give you an in-memory dead letter destination, e.g. replace "log" with "seda". However, keep in mind that once the re-deliveries are exhausted, the message will live in this queue, and hence in memory, unless there is a process de-queuing it or purging messages older than the time period configured for this queue.
https://camel.apache.org/dead-letter-channel.html
You can also just configure your error handling to mark the exception as handled and give up on the message once the maximum re-delivery count is reached.
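A minimal sketch combining both suggestions, assuming the rssUrl and rssHandler from your question (the seda queue name "failedRss" and the drain route are illustrative):

import java.net.ConnectException;
import org.apache.camel.builder.RouteBuilder;

public class RssRetryRoute extends RouteBuilder {
    private final String rssUrl = "http://example.com/feed.xml"; // placeholder for the feed in the question
    private final Object rssHandler = new Object();              // placeholder for the bean in the question

    @Override
    public void configure() {
        // In-memory dead letter destination: exhausted exchanges end up on a seda queue.
        errorHandler(deadLetterChannel("seda:failedRss")
                .maximumRedeliveries(5)
                .redeliveryDelay(10000)
                .backOffMultiplier(2)
                .useExponentialBackOff());

        // Alternative for ConnectException: retry, then mark it handled and drop the
        // message instead of parking it anywhere.
        onException(ConnectException.class)
                .maximumRedeliveries(5)
                .redeliveryDelay(10000)
                .handled(true);

        from("rss:" + rssUrl + "?splitEntries=false&delay=15s")
                .bean(rssHandler);

        // Drain the seda queue so exhausted messages don't pile up in memory forever.
        from("seda:failedRss").to("log:error");
    }
}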
Is there a way to explicitly tell the broker to send a message to the queue's assigned dead letter queue?
I know we can configure a queue to automatically send messages to the DLQ after a certain number of re-delivery attempts. That makes perfect sense for transient errors such as database issues, network issues, etc. However, in the case of a business rule error, it doesn't make sense to have that message attempt redelivery X number of times over X number of minutes before being sent to the dead letter queue when we know it's a business rule violation, a malformed message, etc.
I was hoping there was a way that when we catch a business rule violation we can immediately send the message to that queue's dead letter queue. I know I could explicitly write the code to send it to that dead letter queue, but we will have many (dozens of) configurable queues, and their associated dead letter queues will be configured by our middleware team. I don't want to explicitly code the dead letter queue names or even configure them in my own properties. I'm hoping there is a simple way to tell the broker to immediately send the message to the dead letter queue and not attempt redelivery.
It seems like it should be something like message.deadLetter(). I feel like I must be missing something simple but I don't see any mechanism like that on the consumer, session or message.
There is no accommodation for the feature you're describing in the JMS specification. The JMS spec incorporates redelivery via things like the JMSRedelivered header and JMSXDeliveryCount property. However, it actually makes no mention of "dead-letter" destinations.
That said, such a feature might be provided by a particular JMS broker, but since you don't mention what JMS broker implementation you're using it's impossible to say whether your chosen broker implements such a feature. In any case, it would be configured and/or invoked via an implementation-specific mechanism that would be, by definition, non-portable between brokers and not available from the JMS API.
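For reference, this is roughly all the redelivery information the JMS API itself exposes; the listener below is only a sketch, and the business-rule check and the behaviour of your broker when the listener throws are placeholders for whatever your broker actually does:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

public class RedeliveryAwareListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            // The only redelivery metadata the JMS spec defines:
            boolean redelivered = message.getJMSRedelivered();
            int deliveryCount = message.propertyExists("JMSXDeliveryCount")
                    ? message.getIntProperty("JMSXDeliveryCount") // optional property, not all providers set it
                    : 1;

            if (isBusinessRuleViolation(message)) {
                // There is no portable message.deadLetter(); the closest plain-JMS option is to
                // fail (throw / not acknowledge) and let the broker's own redelivery and
                // dead-letter policy take over, or to route the message somewhere you manage.
                throw new RuntimeException("business rule violation");
            }
            process(message);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private boolean isBusinessRuleViolation(Message m) { return false; } // placeholder
    private void process(Message m) { }                                  // placeholder
}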
I am trying to understand the best use of RabbitMQ to satisfy the following problem.
As context, I'm not concerned with performance in this use case (my peak rate for this flow is 2 TPS), but I am concerned about resilience.
I have RabbitMQ installed in a cluster and, ignoring dead letter queues, the basic flow is: a service receives a request and creates a persistent message, which it queues, in a transaction, to a durable queue (at this point I'm happy the request is secured to disk). I then have another process listening for a message, which it reads (not using auto ack), does a bunch of stuff, and writes a new message to a different exchange/queue in a transaction (again, now happy this message is secured to disk). Assuming the transaction completes successfully, it manually acks the message it originally consumed.
At this point my only failure scenario is if I have a failure between the commit of the transaction that writes to my second queue and the return of the ack. This could lead to a message being processed twice. Is there anything else I can do to plug this gap, or do I have to figure out a way of handling duplicate messages?
As a final bit of context, the services are written in Java, so I'm using the Java client libs.
Paul Fitz.
First of all, I suggest you look at this guide here, which has a lot of valid information on your topic.
From the RabbitMQ guide:
At the Producer
When using confirms, producers recovering from a channel or connection failure should retransmit any messages for which an acknowledgement has not been received from the broker. There is a possibility of message duplication here, because the broker might have sent a confirmation that never reached the producer (due to network failures, etc). Therefore consumer applications will need to perform deduplication or handle incoming messages in an idempotent manner.
At the Consumer
In the event of network failure (or a node crashing), messages can be duplicated, and consumers must be prepared to handle them. If possible, the simplest way to handle this is to ensure that your consumers handle messages in an idempotent way rather than explicitly deal with deduplication.
So, the point is that it is not possible in any way to guarantee that this "failure" scenario of yours will never happen. You will always have to deal with network failures, disk failures, you-name-it failures, etc.
What you have to do here is lean on the messaging architecture and, if possible, make your messages "idempotent" (which means that even if you process a message twice, nothing bad will happen; check this).
If you can't, then you should keep some kind of "processed message" list (for example, you can use a GUID inside every message) and check this list every time you receive a message; duplicates can simply be discarded.
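A rough sketch of that "processed message" list with the RabbitMQ Java client, assuming the producer sets a unique message-id property; the queue name "work-queue" is illustrative, and a real system would back the set with a database or a cache with expiry rather than process memory:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentConsumer {
    // In-memory "processed message" list; persist it (database, cache with TTL, ...)
    // if it must survive restarts.
    private static final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        DeliverCallback callback = (consumerTag, delivery) -> {
            String messageId = delivery.getProperties().getMessageId(); // set by the producer
            if (messageId != null && !processedIds.add(messageId)) {
                // Already processed: this is a redelivered duplicate, just discard it.
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                return;
            }
            // ... do the actual work and write the outgoing message here ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false); // manual ack
        };

        channel.basicConsume("work-queue", false, callback, consumerTag -> { });
    }
}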
To be more theoretical, this post from Brave New Geek is very interesting:
Within the context of a distributed system, you cannot have exactly-once message delivery.
Hope it helps :)
I am using Apache Camel with ActiveMQ and wanting to implement guaranteed message delivery.
I have been reading through the Camel in Action book as well as the Apache Camel Developer's Cookbook.
I am hoping someone here can advise me in my approach. I am not asking for code samples.
The way I envisioned the implementation is as follows:
1. Message is received on an endpoint
2. I inspect the message
3. I use the Wiretap pattern to drop it immediately on my "GuaranteedMessages" queue if the message asks for guaranteed delivery
4. I route the message to its proper destination
5. If no exceptions were encountered, I remove the message manually from the "GuaranteedMessages" queue
Sounds easy enough. However, I have been reading about the "Dead Letter Channel" pattern in Camel.
My understanding is that using this pattern's implementation means that, instead of automatically dropping each (guaranteed) message onto my "GuaranteedMessages" queue, I drop that approach and instead set the redelivery options (max attempts, exponential backoff, redelivery delay, etc.). Then I rely on Camel to try redelivering and simply drop the message off in the dead letter channel if it never goes through.
Then I would have a separate route that uses this dead letter queue as its source. Again, it would be the same pattern. If it cannot succeed, send the message back to the dead letter queue.
Does this sound like a realistic implementation for a production system?
If instead, I decide to drop every incoming message (that needs to be guaranteed) on my own "GuaranteeMessage" queue, is it realistic to believe that I can later go and remove that specific message manually from the queue? My understanding is that I would have to manually browse the queue, iterate through any number of messages, and then consume that message manually. I am not sure how scalable such an architecture really is.
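For what it's worth, removing one specific message later usually means consuming it with a JMSMessageID selector, which is exactly the per-message bookkeeping being worried about here; a rough sketch, assuming the message ID was recorded when the wiretap copy was written and that "GuaranteedMessages" is the queue from the question:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class GuaranteedMessageRemover {
    /** Consumes (and therefore removes) one specific message from the queue. */
    public void remove(ConnectionFactory factory, String messageId) throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("GuaranteedMessages");
            // Selector on the JMS message ID recorded when the message was wiretapped.
            String selector = "JMSMessageID = '" + messageId + "'";
            MessageConsumer consumer = session.createConsumer(queue, selector);
            Message removed = consumer.receive(1000); // null if the message was already gone
            consumer.close();
        } finally {
            connection.close();
        }
    }
}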
Presumably the final destination is not another ActiveMQ queue; it is something that can throw an exception. Your idea of the wiretap is functionally the same as using the DLQ, so you might as well use the Camel functionality, which works fine, to do as much work as possible.
However, two points. Firstly I would use an explicit queue to hold the messages that need retrying, rather than the DLQ, as there is only one DLQ per broker and you don't want anything else unexpected appearing on it.
Secondly if you are just going to take a message from the retry queue and resubmit it, why not just increase the retry count and delay in Camel exception handling? That way your retry queue just has messages that probably require some manual intervention. So when a message is on the retry queue, you manually check/fix whatever the underlying cause is and manually move the message to the input queue.
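A hedged sketch of that suggestion: Camel handles the redeliveries, and only exhausted exchanges are parked on an explicit retry queue for manual intervention (the queue and endpoint names are illustrative, not from the question):

import org.apache.camel.builder.RouteBuilder;

public class GuaranteedDeliveryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Camel retries with an increasing delay; only after redeliveries are exhausted
        // does the original message land on an explicit queue, not the broker-wide DLQ.
        errorHandler(deadLetterChannel("activemq:queue:retry.manual") // illustrative queue name
                .maximumRedeliveries(10)
                .redeliveryDelay(5000)
                .backOffMultiplier(2)
                .useExponentialBackOff()
                .useOriginalMessage());

        from("activemq:queue:orders.in")        // illustrative input queue
                .to("bean:destinationService"); // the delivery step that may throw
    }
}

From there a human can fix the underlying cause and move the message from the retry queue back to the input queue, as described above.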
Let me try explaining the situation:
There is a messaging system that we are going to incorporate which could either be a Queue or Topic (JMS terms).
1 ) Producer/Publisher : There is a service A. A produces messages and writes to a Queue/Topic
2 ) Consumer/Subscriber : There is a service B. B asynchronously reads messages from the Queue/Topic. B then calls a web service and passes the message to it. The web service takes a significant amount of time to process the message. (This action need not be processed in real time.)
The Message Broker is Tibco
My intention is : Not to miss out processing any message from A. Re-process it at a later point in time in case the processing failed for the first time (perhaps as a batch).
Question:
I was thinking of writing the message to a DB before making a webservice call. If the call succeeds, I would mark the message processed. Otherwise failed. Later, in a cron job, I would process all the requests that had initially failed.
Is writing to a DB a typical way of doing this?
Since you have a fail callback, you can just requeue your Message and have your Consumer/Subscriber pick it up and try again. If it failed because of some problem in the web service and you want to wait X time before trying again, then you can either schedule the web service to be called at a later time for that specific Message (look into ScheduledExecutorService) or do as you described and use a cron job with some database entries.
If you only want it to try again once per message, then keep an internal counter either with the Message or within a Map<Message, Integer> as a counter for each Message.
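A minimal sketch of that counter, keyed on the JMS message ID rather than the Message object itself (Message instances do not make reliable map keys across redeliveries); the limit of one retry and the helper methods are illustrative:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.jms.JMSException;
import javax.jms.Message;

public class RetryTrackingConsumer {
    private static final int MAX_ATTEMPTS = 2; // original attempt + one retry

    // Keyed on JMSMessageID instead of the Message itself.
    private final Map<String, Integer> attempts = new ConcurrentHashMap<>();

    public void onMessage(Message message) throws JMSException {
        String id = message.getJMSMessageID();
        int count = attempts.merge(id, 1, Integer::sum);
        try {
            callWebService(message);
            attempts.remove(id);                // success, forget the counter
        } catch (Exception e) {
            if (count < MAX_ATTEMPTS) {
                requeue(message);               // put it back for one more try
            } else {
                attempts.remove(id);
                recordFailureForCron(message);  // e.g. the DB table the question describes
            }
        }
    }

    private void callWebService(Message m) throws Exception { /* the slow web service call */ }
    private void requeue(Message m)                          { /* publish back to the queue */ }
    private void recordFailureForCron(Message m)             { /* persist for the batch job */ }
}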
Crudely put, that is the technique, although there could be out-of-the-box solutions available which you can use. Typical ESB solutions support reliable messaging. Have a look at Mule ESB or Apache ActiveMQ as well.
It might be interesting to take advantage of the EMS platform you already have (example 1) instead of building a custom solution (example 2).
But it all depends on the implementation language:
Example 1 - EMS is the "keeper" : If I were to solve such problem with TIBCO BusinessWorks, I would use the "JMS transaction" feature of BW. By encompassing the EMS read and the WS call within the same "group", you ask for them to be both applied, or not at all. If the call failed for some reason, the message would be returned to EMS.
Two problems with this solution: you might not have BW, and the first failed operation would block all the rest of the batch process (that may be the desired behavior).
FYI, I understand it is possible to use such a feature in "pure Java", but I have never tried it: http://www.javaworld.com/javaworld/jw-02-2002/jw-0315-jms.html
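A rough sketch of what that "pure Java" variant might look like, using a transacted JMS session so the receive and the web-service call either commit together or roll back together (the queue name and helper method are illustrative):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class TransactedConsumer {
    public void consumeOnce(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // Transacted session: the receive only becomes final when commit() is called.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("incoming.requests"); // illustrative name
            MessageConsumer consumer = session.createConsumer(queue);

            Message message = consumer.receive();
            try {
                callWebService(message);
                session.commit();   // message is now permanently consumed
            } catch (Exception e) {
                session.rollback(); // message goes back to the queue for redelivery
            }
        } finally {
            connection.close();
        }
    }

    private void callWebService(Message m) throws Exception { /* long-running web service call */ }
}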
Example 2 - A DB is the "keeper" : If you go with your "DB" method, your queue/topic consumer continuously inserts data into a DB, and every record represents a task to be executed. This feels an awful lot like the simple "mapping engine" problem every integration middleware aims to make easier. You could solve this with anything from custom Java code and multiple threads (DB inserter, WS job handlers, etc.) to an EAI middleware (like BW) or even a BPM engine (TIBCO has many solutions for that).
Of course, there are also other vendors... EMS is a JMS standard implementation, as you know.
I would recommend using the built-in EMS (& JMS) features, as "guaranteed delivery" is what it's built for ;) - no DB needed at all...
You need to be aware that the first decision will be:
do you need to deliver in order? (then only 1 JMS Session and Client Ack mode should be used)
how often and at what recurring intervals do you want to retry? (so as not to create an infinite loop for a message that couldn't be processed by that web service)
This is independent of whatever kind of client you use (TIBCO BW or e.g. Java onMessage() in an MDB).
For "in order" delivery: make shure only 1 JMS Session processes the messages and it uses Client acknolwedge mode. After you process the message sucessfully, you need to acknowledge the message with either calling the JMS API "acknowledge()" method or in TIBCO BW by executing the "commit" activity.
In case of an error you don't execute the acknowledge for the method, so the message will be put back in the Queue for redelivery (you can see how many times it was redelivered in the JMS header).
EMS's Explicit Client Acknolwedge mode also enables you to do the same if order is not important and you need a few client threads to process the message.
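A plain-Java sketch of that client-acknowledge flow (not BW); note that with CLIENT_ACKNOWLEDGE simply skipping the acknowledge is not enough on its own, so the sketch calls session.recover() to ask the broker to start redelivery:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;

public class ClientAckListener implements MessageListener {
    private final Session session; // created with Session.CLIENT_ACKNOWLEDGE

    public ClientAckListener(Session session) {
        this.session = session;
    }

    @Override
    public void onMessage(Message message) {
        try {
            process(message);
            message.acknowledge();   // success: the broker can discard the message
        } catch (Exception e) {
            try {
                // Without an acknowledge the message stays unconsumed; recover()
                // asks the broker to start redelivery (JMSXDeliveryCount goes up).
                session.recover();
            } catch (JMSException ignored) {
            }
        }
    }

    private void process(Message m) throws Exception { /* the actual work */ }
}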
For controlling how often the message gets processed, use:
- the max redelivery property of the EMS queue (e.g. you could put the message in the dead letter queue after x redeliveries so as not to hold up other messages)
- the redelivery delay, to put a "pause" in between redeliveries. This is useful in case the web service needs to recover after a crash and should not get stormed by the same message again and again at a high rate through redelivery.
Hope that helps
Cheers
Seb
I'm trying to move off of ActiveMQ but one feature we'd like to keep is the message group. By adding a session ID to the JMS header ActiveMQ will route all other messages on the queue with the same ID to the same consumer (our consumers may be on different machines) allowing the receiver to treat the group of messages as one unit of work.
My first thought was simply to put the session into CLIENT_ACKNOWLEDGE mode. My thinking was that if consumer A looked at the header and saw it wasn't an ID it was handling, then it could just drop the message and consumer B would pick it up. I've hit several issues, including ActiveMQ's prefetching, and the more I read, the more it looks like that's not what CLIENT_ACKNOWLEDGE was designed for to begin with.
The one idea I can think of is to have a dispatch queue which would then route messages to each consumer's, for lack of a better word, sub-queue and manage matching the session IDs to the sub-queues ourselves.
Before I head down this path, which we're leery of since it'd add more complexity to the code than we'd like, is there anything I'm missing about CLIENT_ACKNOWLEDGE? Or something else entirely I should try first?
Is this what you are trying to do? http://docs.redhat.com/docs/en-US/JBoss_Enterprise_Application_Platform/5/html-single/HornetQ_User_Guide/index.html#message-grouping
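For what it's worth, both ActiveMQ and HornetQ drive message groups off the JMSXGroupID string property, so if the replacement broker follows that convention the producer side stays roughly the same; a sketch with an illustrative queue name and group key:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class GroupedProducer {
    public void send(ConnectionFactory factory, String sessionId, String body) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("work.queue"); // illustrative name
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage(body);
            // Brokers that support message groups (ActiveMQ, HornetQ, ...) pin all
            // messages sharing this property value to a single consumer.
            message.setStringProperty("JMSXGroupID", sessionId);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}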