Scheduled Retrying for an associated JMS Message - java

I have a single-threaded message listener that listens for incoming messages. The received messages are persisted in a database as they arrive.
There is a message A, and an associated message B follows it with a reference to it. In some odd occurrences, B arrives before A. In this case, there have to be 3 retries at some equal interval 'x' to see if A has arrived, and only then is the association persisted.
As the message listener is single-threaded, putting its thread to sleep would affect the whole system, so a separate thread has to do the retrying.
Can we use the Quartz Job Scheduler for this purpose, to avoid handling multithreading issues and to have a persistent store of the retry state, in either of the following 2 ways:
1. Schedule a job in Quartz 3 times and keep track of a flag in the JobDataMap; if a previous retry has already succeeded, return without doing anything.
OR
2. Schedule a job to retry once, and if the retry fails, schedule the same job again after a few seconds.
Can Quartz be used only for repetitive jobs that cannot share state information across executions, or is there any other better way to do this?

You should configure your JMS provider to set a redelivery delay on your message queue. In your code, you call context.setRollbackOnly() to abort a message that does not pass the prerequisites check.
In that case, the execution scenario becomes:
consume "B", check for the prerequisite, and detect that A is missing
roll back the transaction; the message returns to the queue and will be re-delivered after the configured delay
consume and process the next message, "A"
after the delay, the MDB consumes and processes "B" again, this time successfully
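The rollback-and-redeliver flow above can be simulated without a broker. The sketch below is illustrative only (the class and method names are made up, not a JMS API): a message that fails its prerequisite check is re-enqueued, standing in for what the broker does after context.setRollbackOnly().

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

// Simulates broker redelivery: a message that fails its prerequisite check
// is "rolled back" (re-enqueued) and retried later, up to a redelivery limit.
class RedeliverySimulator {
    private final Set<String> persisted = new HashSet<>();

    // Prerequisite check: in a real MDB, a failure here would call
    // context.setRollbackOnly() so the broker re-delivers the message.
    private boolean process(String id, String prerequisite) {
        if (prerequisite != null && !persisted.contains(prerequisite)) {
            return false;
        }
        persisted.add(id);
        return true;
    }

    /** Drains the queue, re-enqueuing rolled-back messages; returns processing order. */
    List<String> drain(Queue<String[]> queue, int maxRedeliveries) {
        List<String> order = new ArrayList<>();
        java.util.Map<String, Integer> attempts = new java.util.HashMap<>();
        while (!queue.isEmpty()) {
            String[] m = queue.poll(); // m[0] = message id, m[1] = prerequisite or null
            if (process(m[0], m[1])) {
                order.add(m[0]);
            } else if (attempts.merge(m[0], 1, Integer::sum) <= maxRedeliveries) {
                queue.add(m); // "re-delivered after the configured delay"
            } // else: would go to a dead-letter queue
        }
        return order;
    }
}
```

With B enqueued before A, B is rolled back once and then processed after A, matching the scenario above.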

Related

Sending a message after several other messages have completed without utilizing an external store?

I have an application which should use JMS to queue several long running tasks asynchronously in response to a specific request. Some of these tasks might complete within seconds while others might take a longer time to complete. The original request should already complete after all the tasks have been started (i.e. the message to start the task has been queued) - i.e. I don't want to block the request while the tasks are being executed.
Now, however, I would like to execute another action per request once all of the messages have been processed successfully. For this, I would like to send another message to another queue - but only after all messages have been processed.
So what I am doing is a bit similar to a reply-response pattern, but not exactly: The responses of multiple messages (which were queued in the same transaction) should be aggregated and processed in a single transaction once they are all available. Also, I don't want to "block" the transaction enqueuing the messages by waiting for replies.
My first, naive approach would be the following:
When a requests comes in:
Queue n messages for each of the n actions to be performed. Give them all the same correlation id.
Store n (i.e. the number of messages sent) in a database along with the correlation id of the messages.
Complete the request successfully
Each of the workers would do the following:
Receive a message from the queue
Do the work that needs to be done to handle the message
Decrement the counter stored in the database based on the correlation id.
If the counter has reached zero: Send a "COMPLETED" message to the completed-queue
However, I am wondering if there is an alternative solution which doesn't require a database (or any other kind of external store) to keep track whether all messages have already been processed or not.
Does JMS provide some functionality which would help me with this?
Or do I really have to use the database in this case?
If your system is distributed, and I presume it is, it's very hard to solve this problem without some kind of global latch like the one you have implemented. The main thing to notice is that the "tasks" have to signal within "global storage" that they are finished. Your app is essentially creating a new countdown-latch instance (identified by the correlation ID) each time a new request comes in, by inserting a row in a db. Your tasks "signal" the end of their jobs by counting that latch down. The task that counts the latch down to zero has to clean up the row.
Now, global storage doesn't have to be a database, but it still has to be some kind of globally accessible state. And you have to keep counting. And if the only thing you have is JMS, you have to create the latch and count it down by sending messages.
The simplest solution that comes to mind is to have each job send a TASK_ENDED message to a JOBS_FINISHED queue. A TASK_ENDED message stands for the signal: "task X, triggered by request Y with correlation ID Z, has ended". It is just like counting down in the db. The recipient of this queue is a special task whose only job is to trigger COMPLETED messages once all messages have been received for a request with a given correlation ID. This job just reads messages sequentially and counts each unique correlation ID it encounters. Once it has counted up to the expected number, it clears that counter and sends a COMPLETED message.
You can encode the number of triggered tasks and any other specifics in the JMS headers of the messages created when processing the request. For example:
// pretend this request handling triggers 10 tasks
// here we are creating first of ten START TASK messages
TextMessage msg1 = session.createTextMessage("Start a first task");
msg1.setJMSCorrelationID(request.id);
msg1.setIntProperty("TASK_NUM", 1);
msg1.setIntProperty("TOTAL_TASK_COUNT", 10);
Then you just pass that info along in the TASK_ENDED messages, all the way to the final job. You have to make sure that all messages sent to the ending job are received by the same instance of that job.
You could go on from here, expanding the idea with publish-subscribe messaging, error handling, temporary queues, and so on, but that is becoming very specific to your needs, so I'll end here.
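The counting job described above can be sketched in plain Java. This is a minimal illustration, not broker code: in reality the inputs would be TASK_ENDED messages read from the JOBS_FINISHED queue, with the correlation ID and TOTAL_TASK_COUNT taken from the JMS headers shown earlier.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Counts TASK_ENDED signals per correlation ID; when the count for a request
// reaches the TOTAL_TASK_COUNT carried in the message headers, it "sends"
// a COMPLETED message and clears the counter (the in-memory countdown latch).
class CompletionAggregator {
    private final Map<String, Integer> counts = new HashMap<>();
    private final List<String> completed = new ArrayList<>();

    /** Handle one TASK_ENDED message; returns true if this one completed the job. */
    boolean onTaskEnded(String correlationId, int totalTaskCount) {
        int n = counts.merge(correlationId, 1, Integer::sum);
        if (n == totalTaskCount) {
            counts.remove(correlationId); // clear the "latch" row
            completed.add(correlationId); // stand-in for sending COMPLETED
            return true;
        }
        return false;
    }

    List<String> completedRequests() { return completed; }
}
```

Note that, as the answer says, this only works if all TASK_ENDED messages for a request reach the same instance of this aggregator.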

RabbitMQ how to split jobs to tasks and handle results

I have the following use case on a Spring-based Web application:
I need to apply the Competing Consumers EIP with the following twist: the messages in the queue are actually split tasks belonging to the same job. Therefore, I need to properly track when all tasks of a job have completed, along with their completion status, in order to record the scenario as either COMPLETED or FAILED, log the outcome, and notify the users accordingly, e.g. by e-mail.
So, given the requirements I described above, my question is:
Can this be done with RabbitMQ and if yes how?
I created a quick gist to show a very crude example of how one could do it. In this example there is one producer, 2 consumers, and 2 queues: one ("SEND") published to by the producer and consumed by the consumers, and vice versa, one ("RECV") published to by the consumers and consumed by the producer.
Now bear in mind this is a pretty crude example, as the producer in this case sends just one job (a random number of tasks between 0 and 5) and blocks until the job is done. A way to circumvent this would be to store a job ID and its number of tasks in a Map, and each time check the number of tasks reported done per job ID.
What you are trying to do is beyond the scope of RabbitMQ. RabbitMQ is for sending and receiving messages, with the ability to queue them.
It can't track your job's tasks for you.
You will need a "Job Storage" service. Whenever a consumer finishes a task, it updates the Job Storage service, marking the task as done. The Job Storage service knows how many tasks are in the job, and when the last task is done, it marks the job as succeeded. In this service you will also implement the rest of your business logic, such as when to treat a job as failed.
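A minimal in-memory sketch of such a Job Storage service might look as follows. This is an assumption about the shape of the service, not a RabbitMQ API; a real implementation would back the map with a database and handle the failed-job logic.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal in-memory stand-in for a "Job Storage" service: tracks remaining
// tasks per job and reports completion when the last task finishes.
// (A real service would persist this and make the decrement+check atomic.)
class JobStorage {
    private final ConcurrentMap<String, Integer> remaining = new ConcurrentHashMap<>();

    void createJob(String jobId, int taskCount) {
        remaining.put(jobId, taskCount);
    }

    /** Marks one task of the job done; returns true when the whole job is complete. */
    boolean taskDone(String jobId) {
        Integer left = remaining.computeIfPresent(jobId, (id, n) -> n - 1);
        if (left != null && left == 0) {
            remaining.remove(jobId); // job succeeded; log/notify here
            return true;
        }
        return false;
    }
}
```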

Spring AMQP take action on message timeout

I am using Spring AMQP with asynchronous messaging. My model assumes that there are two applications A and B, both producers and consumers.
A sends a job request to B and starts listening.
B listens for job requests; when one comes, it starts the job and periodically sends progress messages to A.
B sends a job-finished message to A after the job is finished.
A consumes progress messages until the job-finished message comes, then A exits.
I am using #RabbitListener at class level and #RabbitHandler at method level for message consumption. Everything works nicely and the design is clean; I like Spring's solution.
My problem is: I have no idea how to detect, and how to act, when A is expecting a progress message from B (any message) and it does not come in. Is there a timeout for such cases? If so, how do I implement a callback method?
I found some timeout settings, but they usually apply to the connection itself, or only to the RPC pattern (one request, one response).
The desired behavior is: A should receive a progress message every minute. If no progress message is consumed for, say, 3 minutes, I want to cancel the job.
When using async consumers, there's no mechanism to generate an event if a message is not received within some time period.
You can schedule your own task though and cancel/reschedule the task when a message arrives.
Use a TaskScheduler with
future = scheduler.schedule(myRunnable, new Date(System.currentTimeMillis() + 180000));
and call future.cancel(true) when a message arrives.
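The cancel/reschedule idea can be shown with the JDK's own ScheduledExecutorService, which Spring's TaskScheduler wraps. A minimal sketch (the class name and timeout handling are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Watchdog: (re)schedules a cancellation task on every progress message;
// if no message arrives within the timeout, the job is marked cancelled.
class ProgressWatchdog {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long timeoutMillis;
    final AtomicBoolean cancelled = new AtomicBoolean(false);
    private ScheduledFuture<?> future;

    ProgressWatchdog(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

    /** Call from the listener whenever a progress message arrives. */
    synchronized void onProgressMessage() {
        if (future != null) {
            future.cancel(false); // message arrived in time; drop the pending timeout
        }
        future = scheduler.schedule(
                () -> cancelled.set(true), // ran only if no message for timeoutMillis
                timeoutMillis, TimeUnit.MILLISECONDS);
    }

    void shutdown() { scheduler.shutdownNow(); }
}
```

In the question's scenario the timeout would be 3 minutes (180000 ms), and the scheduled task would cancel the job instead of setting a flag.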

SimpleMessageListenerContainer bulk message processing

I have a stream of incoming data that is sent to RabbitMQ as individual messages.
I want to send these to a service that requires a batch of messages. I need to send the request to the service when I either have a batch of 1000 messages or when 5 seconds have expired. Is this possible using SimpleMessageListenerContainer?
The SimpleMessageListenerContainer supports transactions; however, this won't help with the 5-second timeout. I did look at the method doReceiveAndExecute(BlockingQueueConsumer consumer) and at "receiveTimeout", but as this variable is used inside the transaction loop, I could end up waiting 5 seconds per message (1000 × 5 s ≈ 83 min).
I currently have a channel-aware listener that feeds the messages into a bulk processor that manages my timeouts and queue length. The SimpleMessageListenerContainer is set to manual ack. However, as the listener returns before the message has actually been sent to the service, I occasionally have issues when I do come to ack the message, because the channel has been closed.
I have thought about writing my own ListenerContainer that sends the whole BlockingQueueConsumer to the Listener. Is this the only solution or has anyone managed to do something similar already?
You can use a ChannelAwareMessageListener with acknowledgeMode=MANUAL; accumulate the deliveries in the listener, start a timer (scheduled task) to execute 5 seconds later, and keep a reference to the channel. When a new delivery arrives, cancel the task and add the new delivery to the collection.
When 1000 deliveries arrive (or the scheduled task fires); invoke your service; then use channel.basicAck() (multiple) to ack the processed messages.
You'll have some race conditions to deal with but it should be pretty easy. Perhaps another queue of batches would be easiest with some other thread waiting for batches to arrive in that queue.
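The accumulate-and-flush logic (ignoring the channel/ack plumbing) can be sketched like this. Names are illustrative, and the clock is passed in explicitly so the size-or-timeout rule is easy to see; a real listener would use a scheduled task for the timeout, as described above.

```java
import java.util.ArrayList;
import java.util.List;

// Accumulates deliveries and flushes either when the batch size is reached
// or when maxWaitMillis has elapsed since the first message of the batch.
class BatchAccumulator {
    private final int batchSize;
    private final long maxWaitMillis;
    private final List<String> batch = new ArrayList<>();
    private final List<List<String>> flushed = new ArrayList<>();
    private long firstMessageAt;

    BatchAccumulator(int batchSize, long maxWaitMillis) {
        this.batchSize = batchSize;
        this.maxWaitMillis = maxWaitMillis;
    }

    synchronized void onMessage(String body, long nowMillis) {
        if (batch.isEmpty()) firstMessageAt = nowMillis;
        batch.add(body);
        if (batch.size() >= batchSize || nowMillis - firstMessageAt >= maxWaitMillis) {
            flush(); // here: call the service, then channel.basicAck(tag, true)
        }
    }

    /** Also invoked by the scheduled task when the timeout fires on a partial batch. */
    synchronized void flush() {
        if (!batch.isEmpty()) {
            flushed.add(new ArrayList<>(batch));
            batch.clear();
        }
    }

    List<List<String>> flushedBatches() { return flushed; }
}
```

In the question's terms the parameters would be 1000 messages and 5000 ms.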
EDIT
As of 2.2, the SimpleMessageListenerContainer supports delivering batches of messages natively - see Batched Messages.
Starting with version 2.2, the SimpleMessageListenerContainer can be used to create batches on the consumer side (where the producer sent discrete messages).
Set the container property consumerBatchEnabled to enable this feature. deBatchingEnabled must also be true so that the container is responsible for processing batches of both types. Implement BatchMessageListener or ChannelAwareBatchMessageListener when consumerBatchEnabled is true. See #RabbitListener with Batching for information about using this feature with #RabbitListener.
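Based on the Batched Messages documentation referenced above, the container configuration would look roughly like this, assuming Spring AMQP 2.2+ (the queue name, connectionFactory, and batchService are placeholders):

```java
SimpleMessageListenerContainer container =
        new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("events");        // placeholder queue name
container.setConsumerBatchEnabled(true);  // build batches on the consumer side
container.setDeBatchingEnabled(true);     // required when consumerBatchEnabled is true
container.setBatchSize(1000);             // deliver when 1000 messages are buffered...
container.setReceiveTimeout(5000);        // ...or release a partial batch after 5 s
container.setMessageListener((BatchMessageListener) messages ->
        batchService.process(messages));  // placeholder service call
container.start();
```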

Trigger Java process detecting record meeting specific condition

I am working on a Java Project using Spring. Database is Oracle
We have a message listener configured in the container, attached to a remote queue. The following are the steps we perform once onMessage gets triggered:
Parse the message.
Insert the message into the database.
Based on the content of the message, do some additional processing involving file processing, DB inserts/updates, etc.
If the message received in the queue is good, but we were unable to process it due to some issue on our side, we have no way to reprocess the message after waiting for some time [assuming the issue that triggered the error gets resolved].
Following is the new design proposed.
1. Parse the message.
2. Insert the message into the database with a flag, say "false". [The flag only changes when the message is successfully processed.]
A new process is added that queries the database for records flagged "false" [one at a time], processes them, and updates the flag to true. If the processing fails, it retries the same record a configurable number of times. The process can die when there are no more records to process or the retry count is exhausted.
Please suggest a reasonable design that detects a record flagged "false" and processes the message at the earliest possible time:
Trigger a Java process using a database trigger? [The DBA is against it.]
Is there a way to trigger the process in the onMessage method, after the database insert is done, without blocking retrieval of the next message?
Schedule a job that polls the database at a regular interval?
This can be done in Spring with the #Async annotation, which allows you to launch a task asynchronously after the insert completes.
This means the thread that performed the insert will not block while the #Async operation runs; it returns immediately.
Depending on the task executor configured, the #Async method will execute in a separate thread, which is what you need in this case. I would suggest starting with SimpleAsyncTaskExecutor; see here for the different task executors available.
Check also this Spring tutorial for further info.
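Conceptually, #Async hands the call off to a task executor so the calling (onMessage) thread returns at once. A minimal plain-Java sketch of that same hand-off, with illustrative names and a string result standing in for the file/DB processing:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// What #Async does under the hood: the caller submits the follow-up
// processing to a task executor and returns immediately.
class AsyncHandoff {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    /** Returns immediately; processing of the stored record runs on another thread. */
    CompletableFuture<String> processStoredRecord(String recordId) {
        return CompletableFuture.supplyAsync(
                () -> "processed:" + recordId, // stand-in for file/DB processing
                executor);
    }

    void shutdown() { executor.shutdown(); }
}
```

With Spring, the same effect is achieved by annotating the processing method with #Async and configuring a task executor bean.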
Since you are already using Spring Integration, why not just send the enhanced message to a new channel and process it there? If the channel is a QueueChannel, the processing will be asynchronous. There are retry features available as well.
