Hi all,
in the software I'm developing I have different Camel routes that work on data which is (in this case) loaded from an IMAP server using the camel-mail component.
Each of those routes does something with the data and then hands it to the next route. They are dynamically configured at runtime.
In between those routes is an embedded ActiveMQ server which is used by each route to load the data from and save the data to (for the next route to pick it up).
Because of this structure I have a special case with the camel-mail consumer.
When a mail is loaded and sent to the first ActiveMQ queue, it is immediately deleted/marked as read (depending on the settings of the mail consumer), even though the actual processing of the mail has not concluded yet, as the following routes still have to process it.
This is a simplified view:
from("imaps://imap.server.com?...")
    // Format the mail in a way the other routes understand
    .to("activemq:queue1"); // After this the mail is deleted on the imap server

from("activemq:queue1")
    // Do some processing
    .to("activemq:queue2");

from("activemq:queue2")
    // Do some final processing
    .to("..."); // NOW the mail should be deleted on the imap server
This issue is even more of a problem with the error handling I do.
Every route in this "chain" sends failed exchanges to a dead letter queue on the ActiveMQ server. This way there is one error handling route, which picks up the failed exchanges and deals with them, no matter where the chain crashed.
In case there is a problem, I want the email on the imap server to be handled differently (maybe even do nothing and try again on the next poll).
As Camel's InOut MEP returns the exchange to the (mail) consumer when the route ends, i.e. when the exchange is handed to the queue, I can't use the consumer to delete the mails after the whole process has ended.
Unfortunately I also don't see a delete option on the mail producer (which makes sense I guess, because that's not how IMAP works).
I could also use SMTP for this if that's necessary.
Does anybody have an idea how I could achieve this using no other connector than the Camel mail component to connect to the mail server?
Greets and thanks in advance
Chris
Edit:
Adding the parameter exchangePattern=InOut to the JMS endpoints (.to("activemq:queue1?exchangePattern=InOut")) makes the mail component wait for the whole process to finish.
The problem with that is that we lose the big advantage of ActiveMQ: all routes are independent of each other. This matters so that we don't run into trouble consuming mails when a later route takes a long time to process, which is very likely to happen.
So ideally we'd find a solution where the mail is deleted without any component waiting for something to finish.
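One workaround, sketched here under assumptions (camel-mail does not offer this out of the box): consume with delete=false&peek=true so the mail stays untouched on the server, carry its Message-ID header through the chain, and have the final route delete the mail explicitly with plain JavaMail. Host, credentials, and the deleteByMessageId helper below are all placeholders, not Camel API:

```java
import javax.mail.*;
import javax.mail.search.HeaderTerm;
import java.util.Properties;

public class ImapDeleter {

    // Hypothetical helper, called from a processor in the last route:
    // reconnects to the IMAP server and deletes the mail identified by
    // its Message-ID header.
    public static void deleteByMessageId(String host, String user, String password,
                                         String messageId) throws MessagingException {
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("imaps");
        store.connect(host, user, password);
        try {
            Folder inbox = store.getFolder("INBOX");
            inbox.open(Folder.READ_WRITE);
            for (Message m : inbox.search(new HeaderTerm("Message-ID", messageId))) {
                m.setFlag(Flags.Flag.DELETED, true);
            }
            inbox.close(true); // true = expunge, i.e. really delete
        } finally {
            store.close();
        }
    }
}
```

The consumer endpoint would then be something like imaps://imap.server.com?delete=false&peek=true, and the final route calls deleteByMessageId(...) only after its own processing succeeded; a failed exchange simply leaves the mail on the server for the next poll.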
Related
Does ActiveMQ support an idempotent producer? I know Camel has an idempotent consumer pattern to detect and handle duplicate messages, but I'm wondering if this can be prevented at the source (producer).
Here is a little background. I have applications that are horizontally scaled, accessing the same database. One particular table maintains the status of a process. The horizontally scaled applications should be able to read the status and invoke another process, however only one of them should actually invoke it. Each application periodically polls the database and posts a message to a messaging broker once the required condition is met. But only one of the load-balanced applications should post the message.
One crude approach I'm thinking of is...
On Machine 1:
Read the database for checking if the necessary condition is met.
Before posting message to the broker, write a record to another status table with a unique key that identifies the process and commits. If this operation fails due to unique key constraint violation, it means process on another machine succeeded in posting the message.
Post the message to the broker
If posting the message fails for some reason, delete the record from the status table using the unique key / primary key.
The same steps are performed by the same application running on machines 2, 3, 4, etc.
Below is one pitfall I quickly noticed with this approach.
Assume Machine 1 completes step 2 but fails at step 3 and continues with step 4. Meanwhile Machine 2, having failed at step 2, moves on without attempting to read the status again, so the message never gets posted.
To address this, I need to retry step 3 until the message is successfully posted to the broker.
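The claim-then-publish steps above can be sketched as follows, with a ConcurrentHashMap standing in for the status table (putIfAbsent plays the role of the INSERT that fails on the unique-key violation; in the real setup this must of course be the shared database, and the names here are made up):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ClaimThenPublish {
    // In-memory stand-in for the shared status table.
    static final ConcurrentHashMap<String, String> statusTable = new ConcurrentHashMap<>();

    // Step 2: try to claim the process; true means this machine won the race,
    // false means another machine already inserted the key.
    static boolean claim(String processKey, String machine) {
        return statusTable.putIfAbsent(processKey, machine) == null;
    }

    // Step 4: release the claim if publishing failed, so a retry (or another
    // machine) can claim it again on the next poll.
    static void release(String processKey) {
        statusTable.remove(processKey);
    }
}
```

claim() corresponds to step 2 and release() to step 4; the retry on step 3 then loops around these two calls.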
Another option is to use the https://camel.apache.org/components/latest/eips/idempotentConsumer-eip.html pattern. But that is essentially a filter on the consumer side. Though it would serve my purpose, is there a similar approach available out of the box on the message publishing side?
I wonder if this approach is even correct, whether there is a better alternative, or whether any existing libraries can provide this kind of locking mechanism across JVMs, local or remote.
It's not clear what version of ActiveMQ you're using (i.e. ActiveMQ 5.x or ActiveMQ Artemis) so I'll try to address this issue for both.
ActiveMQ 5.x doesn't have any built-in support for detecting duplicates sent from clients. However, you could potentially implement this feature using a broker plugin. The only challenge I see here is configuring, managing, and monitoring the cache of duplicate IDs.
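That cache of duplicate IDs could be as small as a bounded LRU map. The class below is only an illustration of how such a plugin might keep the cache from growing without limit; it is not an ActiveMQ API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache of message IDs already seen. firstTimeSeen() returns
// true the first time an ID appears and false for a duplicate; the eldest
// entry is evicted once the cache exceeds maxEntries.
public class DuplicateIdCache {
    private final Map<String, Boolean> seen;

    public DuplicateIdCache(final int maxEntries) {
        // accessOrder=true makes iteration order least-recently-used first
        seen = new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public synchronized boolean firstTimeSeen(String messageId) {
        return seen.put(messageId, Boolean.TRUE) == null;
    }
}
```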
ActiveMQ Artemis does have built-in support for detecting duplicates sent from clients. You can read more about duplicate detection in the documentation. Since the broker supports this behavior natively, it provides clean configuration, management, and monitoring.
In either case you'll need to set a special header on each message with "a unique key that identifies the process" just like you would for your potential database solution. Furthermore, using the broker as the duplicate detector is much simpler overall.
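With ActiveMQ Artemis the property in question is _AMQ_DUPL_ID. A sketch follows; duplicateId is a hypothetical helper that derives a stable ID from your business key, and the JMS part is only commented out because it needs a live broker:

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class DuplicateIdHelper {

    // Hypothetical helper: the same process key always yields the same ID,
    // so a republished message carries the same _AMQ_DUPL_ID and the broker
    // drops it as a duplicate.
    public static String duplicateId(String processKey) {
        return UUID.nameUUIDFromBytes(processKey.getBytes(StandardCharsets.UTF_8)).toString();
    }

    // With a live JMS session (requires a running broker, so only sketched):
    //   TextMessage m = session.createTextMessage(payload);
    //   m.setStringProperty("_AMQ_DUPL_ID", duplicateId(processKey));
    //   producer.send(m);
}
```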
If you're currently using ActiveMQ 5.x but want to move to ActiveMQ Artemis in order to use the duplicate detection feature you don't necessarily need to update your clients as ActiveMQ Artemis fully supports the OpenWire protocol used by 5.x clients. You should just be able to point them to the new instance of ActiveMQ Artemis and have everything work.
We are attempting to use the AWS SQS temporary queues library for synchronous communication between two of our apps. One app utilises an AmazonSQSRequester while the other uses an AmazonSQSResponder; both are created using the builders from the library and wired in as Spring beans in app config. Through the AWS console we create an SQS queue to work as the 'host queue' required for the request/response pattern. The requesting app sends to this queue, and the responding app uses an SQSMessageConsumer to poll the queue and pass messages to the AmazonSQSResponder. Part of how (I'm fairly sure) the library works is that the Requester spins up a temporary SQS queue (a real, static one), then sends that queue's URL as an attribute in a message to the Responder, which then posts its response there.
Communication between the apps works fine and temporary queues are automatically created. The issue is that when the Requester app shuts down, the temporary queue (now orphaned and useless) persists, when it should be cleaned up by the library. Information on how we expect this cleanup to work can be found in this AWS post:
The Temporary Queue Client client addresses this issue as well. For each host queue with recent API calls, the client periodically uses the TagQueue API action to attach a fresh tag value that indicates the queue is still being used. The tagging process serves as a heartbeat to keep the queue alive. According to a configurable time period (by default, 5 minutes), a background thread uses the ListQueues API action to obtain the URLs of all queues with the configured prefix. Then, it deletes each queue that has not been tagged recently.
The problem we are having is that when we kill the Requester app, unexplained messages appear in the temporary queue / response queue. We are unsure which app puts them there. Messages in the queue prevent the automagic cleanup from happening. The unexplained messages all share the same content, a short string:
.rO0ABXA=
This looks like it was logged as a bug with the library: https://github.com/awslabs/amazon-sqs-java-temporary-queues-client/issues/11. Will hopefully be fixed soon!
I understand JMS as depicted by the following diagram:
(source: techhive.com)
Is there any way for me to access the underlying database using JMS or something else? Further, can I add new connections to the JDBC connections the JMS server maintains, so as to access other databases as well and do CRUD operations on them? If yes, how?
Where did you get this from?
Normally JMS is used to send messages to queues (or topics). Message producers push messages into the queue and message consumers consume and process them.
In your example it seems that you have multiple queues: one for the messages that need to be processed, and one per client to retrieve the results of processing its messages.
With a JMS server you don't necessarily have a database behind it. Everything can stay in memory, or can be written to files. You need a database server behind it only if you configure your JMS server to be persistent (to ensure that messages won't be lost even if the server/application crashes). But even then you never have to interact with the database: only the JMS server will, and you interact with the JMS server by sending and consuming messages.
Scenario 1:
Set up a JMS queue in your server.
Write Java code to send messages to the producer.
Create a JMS producer which, when invoked, receives the email data (subject, body, to, cc, etc.) and posts it to the queue set up in step 1.
Create a JMS consumer which subscribes to the queue created in step 1 and, in its onMessage, calls the JavaMail API to send the email.
Scenario 2:
Directly call the JavaMail API to send the email.
I know how to use JMS and JavaMail and what they do. The thing is, why do we go from Scenario 2 to Scenario 1 for sending mails? Initially we did Scenario 2; now we are using Scenario 1. Different parts of the big application send mails, so we use a JMS queue, and a consumer of that queue sends the mails from there. Please help me understand.
You would use this mechanism in a large application for 2 reasons:
1) You don't want your clients to have to wait for the mail to be sent.
2) You don't want to lose mails if you lose connectivity to your mail server for any reason.
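The decoupling in reason 1 can be illustrated without any broker at all; below, an ArrayBlockingQueue stands in for the JMS queue and a sleep() for the slow SMTP conversation (all the names are made up for the sketch):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

// The "client" only enqueues the mail request and returns immediately;
// a background worker does the slow send later.
public class MailQueueDemo {
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
    static final CountDownLatch sent = new CountDownLatch(1);

    static void enqueueMail(String mail) throws InterruptedException {
        queue.put(mail); // returns at once; the client never waits on SMTP
    }

    static void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                String mail = queue.take();
                Thread.sleep(50); // pretend this is the slow SMTP conversation
                System.out.println("sent: " + mail);
                sent.countDown();
            } catch (InterruptedException ignored) {
            }
        });
        worker.setDaemon(true);
        worker.start();
    }
}
```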
You would do this if you don't have a reliable MTA near your local machine but need to be sure your mail will be sent. For example, if there is a network outage and you rely on JavaMail to send your mail without additional logic, your mail will not be sent at all.
Using JMS you can reschedule the mail for transfer as soon as the real MTA becomes available again.
Besides:
the conversation with the mail provider (SMTP and POP3) is asynchronous and close to the JMS/MDB API. So why should I use a different API than JMS?
You can keep the mail handling in one transaction, together with database changes and other activities. I remember too many Spring projects where the customer demanded an atomic operation that included a state change in a DB ;-)
Imagine the messages you send become mandatory and you have to connect to an X.400 service. Then the slight code change (and the swap of the resource adapter) will show you made the right architectural decision.
I have an MDB deployed on JBoss that gets messages from a WebSphere MQ queue, looking in each message header for group ID and sequence information. Once it has all sequences for a group, it puts together the payloads of the messages it received to form one big message and sends it to another system.
Now the MDB will be deployed in a WebSphere Application Server 7 clustered environment, and I don't know whether there is any caching/configuration available to gather all message sequences for a group on one instance of the cluster (otherwise, if one instance receives some message parts and another instance the rest, the MDB won't be able to assemble the one big message in the end).
I read that the generic jms-ra resource adapter can be configured with com.sun.genericra.loadbalancing.selector (e.g. JMSType = 'Instance1', and so on for the other instances).
The JMSType header should then be present in the message and be 'Instance1' for instance 1 to process it.
But I'm not sure the system that puts the messages in the queue the MDB picks them up from will include such information in its message headers.
Is there a way to configure the cluster to achieve this?
When working in a cluster environment, MDBs work independently. There are several ways to achieve synchronization.
1) You can use selectors to divide message flows between cluster nodes. Here are the docs: http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=%2Fcom.ibm.mq.csqzal.doc%2Ffg20180_.htm
The main problem is that selectors need some info in the message properties to do their job. Somebody must put it there.
2) You can synchronize on a "shared" data store, such as a DB: put the received messages there, and do the further processing asynchronously or once the last message of a group has arrived.
3) You can build a "proxy" yourself. Add an additional "internal" queue. Take messages from the external queue with several MDBs, analyze them, and set the properties needed for point 1. Then put the messages in the internal queue and read them, as in point 1, using selectors on different nodes.
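To make point 1 concrete, the activation config of the MDB on each node might look like the sketch below. destinationType and messageSelector are standard JMS activation properties; the class name and the selector value JMSType = 'Instance1' are assumptions taken from the question:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Sketch only: pins one subset of messages to this node. The selector can
// only work if the producing system really sets JMSType (or some other
// property) on every message -- exactly the caveat from the question.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "messageSelector",
                              propertyValue = "JMSType = 'Instance1'")
})
public class GroupAssemblerMdb implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // collect the sequences for the group here ...
    }
}
```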