Trigger Java process detecting record meeting specific condition - java

I am working on a Java project using Spring; the database is Oracle.
We have a message listener configured in the container, attached to a remote queue. These are the steps we perform once onMessage is triggered:
1. Parse the message.
2. Insert the message into the database.
3. Based on the content of the message, do some additional processing involving file processing, DB inserts/updates, etc.
If the message received from the queue is valid but, due to some issue on our side, we are unable to process it, we have no way to reprocess the message after waiting for some time [assuming the issue that triggered the error gets resolved].
The new design proposed is as follows:
1. Parse the message.
2. Insert the message into the database with a flag, say "false". [The flag only gets changed when the message is successfully processed.]
3. A new process queries the database for records flagged "false" [one at a time], processes each record, and updates the flag to "true". If processing fails, it retries a configurable number of times on the same record. The process can exit when there are no more records to process or when it has exhausted the retry count.
Please suggest a reasonable design that processes the message at the earliest possible time after detecting a record flagged "false":
- Trigger a Java process using a database trigger? [The DBA is against it.]
- Is there a way we can trigger the process in the onMessage method after the database insert is done, without blocking retrieval of the next message?
- Schedule a job that polls the database at regular intervals?

This can be done in Spring with the @Async annotation, which lets you launch a task asynchronously after the insert completes.
This means the thread that performed the insert will not block while the @Async operation runs; it returns immediately.
Depending on the task executor configured, the @Async method executes in a separate thread, which is what you need in this case. I would suggest starting with SimpleAsyncTaskExecutor; see here for the different task executors available.
Check also this Spring tutorial for further info.
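A minimal sketch of that wiring (class and method names here are illustrative, not from the question; the executor defaults depend on your Spring version and configuration):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync // turns on @Async processing for the application context
class AsyncConfig {
}

@Service
class MessageProcessor {

    // Called from onMessage after the insert is committed. Runs on a separate
    // thread from the configured TaskExecutor, so the listener thread returns
    // immediately and can pick up the next message from the queue.
    @Async
    public void processStoredMessage(long messageId) {
        // load the record flagged "false", process it, flip the flag to "true"
    }
}
```

Note that @Async only takes effect when the method is invoked through the Spring proxy, i.e. called from another bean, not from within the same class.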

Since you are already using Spring Integration, why not just send the enriched message to a new channel and process it there? If the channel is a QueueChannel, the processing will be asynchronous. There are retry features available as well.
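A rough sketch of that wiring in Java config (the channel name, capacity, and poller interval are made-up illustrations):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.messaging.Message;

@Configuration
class FlowConfig {

    // A buffering channel: sends return immediately; a poller consumes later,
    // so the listener that did the DB insert is never blocked.
    @Bean
    public QueueChannel processingChannel() {
        return new QueueChannel(100); // bounded capacity
    }

    // Polled asynchronously, off the message-listener thread
    @ServiceActivator(inputChannel = "processingChannel",
                      poller = @Poller(fixedDelay = "1000"))
    public void handle(Message<?> message) {
        // process the stored record and flip its flag to "true"
    }
}
```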

Related

Executorservice exception handling in java

I am using the executor service feature of Java, and I want to understand the design perspective: if something goes wrong in one of the batches, what is the best approach to handle it?
I am creating a fixed thread pool as:
ExecutorService pool = Executors.newFixedThreadPool(10);
I am also using invokeAll() to invoke all the callables, which returns Future objects.
Here is my scenario:
I have 1000 records coming from an XML file that I want to save into the DB. I created 10 batches, each batch containing 100 records.
The batches started processing (say batch1, batch2, batch3, ... batch10), and let's say one batch (batch7) hit an error for a particular record while parsing it from the XML and could not save it into the DB.
So my questions are: how can I handle this situation?
How can I get/store the failed batch information (batch7 above)?
If there is an error in any batch, should I stop all the other batches?
And where can I store the information for the failed batch, and how can I pick it up for further processing once the error is corrected?
The handler that contains the logic to process the records should have a variable that stores the batch number.
The handler should ideally have finite retry logic for a small set of database errors.
Once the retry count is exhausted, the failure warrants human intervention, and the handler should exit, throwing an exception that includes the batch number. The executor should then call shutdown(); if your logic demands stopping the process immediately, call shutdownNow() instead. Ideally your design should be resilient to such failures and let the other batches continue their work even if one fails. Hope this helps.
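As a plain-JDK illustration of tracking the failing batch number (the batch bodies below are stand-ins for the real record-saving logic): invokeAll() waits for all batches to finish, and a batch's exception surfaces when you call get() on its Future, so you can record which batches failed and reprocess them later without stopping the others.

```java
import java.util.ArrayList;
import java.util.Callable;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class BatchRunner {
    // Each callable returns its batch number; a failing batch throws, and the
    // failure surfaces through Future.get() as an ExecutionException.
    static List<Integer> runAndCollectFailures(int batches, int failing)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Integer>> tasks = new ArrayList<>();
        for (int i = 1; i <= batches; i++) {
            final int batchNo = i;
            tasks.add(() -> {
                if (batchNo == failing) {
                    throw new IllegalStateException("parse error in batch " + batchNo);
                }
                return batchNo; // pretend the 100 records were saved here
            });
        }
        List<Integer> failed = new ArrayList<>();
        List<Future<Integer>> results = pool.invokeAll(tasks); // waits for all batches
        for (int i = 0; i < results.size(); i++) {
            try {
                results.get(i).get();
            } catch (ExecutionException e) {
                failed.add(i + 1); // record the failed batch for later reprocessing
            }
        }
        pool.shutdown();
        return failed;
    }
}
```

The failed list can then be persisted (file or DB) so a later run can pick up just those batches.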
You should use CompletableFuture to do this.
Use CompletableFuture.runAsync(..) to start a process asynchronously; it returns a future. On this future you can use the thenAccept(..) or thenRun(..) methods to do something when the process completes.
There is also a method, exceptionally(..), to do something when an exception is thrown.
By default it runs the task on the common ForkJoinPool, but you can supply your own executor if necessary.
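A small self-contained sketch of that chain (the work inside runAsync is a placeholder for the real record processing):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

class AsyncDemo {
    // Launch work asynchronously, react on completion, and capture any error.
    static String process(boolean fail) {
        AtomicReference<String> outcome = new AtomicReference<>("pending");
        CompletableFuture<Void> future = CompletableFuture
            .runAsync(() -> {
                if (fail) throw new IllegalStateException("record could not be saved");
            })
            .thenRun(() -> outcome.set("done"))                  // runs only on success
            .exceptionally(ex -> { outcome.set("failed"); return null; }); // runs on error
        future.join(); // block only for this demo; callers normally would not
        return outcome.get();
    }
}
```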
How can I handle this situation?
It all depends on your requirements.
How can I get/store the failed batch information (batch7 above)?
You can store it either in a file or in a database.
If there is an error in any batch, should I stop all the other batches?
This depends on your business use case. If you have a requirement to stop batch processing on even a single batch failure, you have to stop the subsequent batches; otherwise you can continue with the next set of batches.
Where can I store information for the failed batch, and how can I take it up for further processing once the error is corrected?
This also depends on your requirements and design. You may have to inform the source about the problematic XML file so that they can correct it and send it back to you. Once you receive the new copy, you push the new file for processing; whether that is manual or automated depends on your design.

How to execute a particular method in Java before WebSphere goes down?

I have a job in a Java application that reads data from an Oracle database using Spring JDBC every 5 minutes. The Java application runs on WebSphere Application Server. It loads records with status 'X' and, after loading them, changes their status to 'Y'. We read 10k records at a time and supply 1k records to each of 10 threads for processing.
After a record is processed, its state changes to 'Z'. Now, if something goes wrong while processing records, such as an OutOfMemoryError, and WebSphere goes down, the record state remains 'Y'.
So when the server starts next time, the job starts reading records with status 'X', but the unprocessed records with status 'Y' will never be loaded. Is there any way to call a method while WebSphere is going down, in which I can reset the status of unprocessed records to 'X' so they get picked up the next time the server starts?
If the application has encountered an OutOfMemoryError, there is really no reliable way to make sure that some code gets executed before going down (the OutOfMemoryError will not necessarily kill the process by itself, but it doesn't matter: if you're out of memory, you can't be sure you'll be able to do anything at all in that process).
What you should do is get rid of the 'Y' state. Just make sure that the job that reads the items doesn't execute more than once concurrently (see below). You should then be able to simply read the items, send them off for processing, and set the state to 'Z' when you're done (preferably in the same transaction as the processing of each item).
Now, you don't specify how your job is kicked off every five minutes, so I'm assuming you're using Spring's scheduling functionality. If so, the job will never fire more than once as long as it's still running. This means your job needs to keep track of the items it has sent off and wait for them to finish before exiting. This can be done using an ExecutorCompletionService: send each subtask (i.e. chunk of 1k records) to the same ExecutorCompletionService and poll for finished tasks as long as tasks remain. When all subtasks have returned, you can safely exit the parent job.
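A minimal sketch of that pattern with the JDK's ExecutorCompletionService (the chunk work is a placeholder): the parent job submits every chunk and then takes exactly as many completions as it submitted, so it cannot exit while work is still in flight.

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ChunkJob {
    // Submit chunks and wait for every one before the parent job exits,
    // so the scheduler never overlaps two runs of the job.
    static int processAll(int chunks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < chunks; i++) {
            final int chunk = i;
            cs.submit(() -> chunk + 1); // stand-in for processing 1k records
        }
        int done = 0;
        for (int i = 0; i < chunks; i++) {
            cs.take().get(); // blocks until the next chunk finishes; rethrows failures
            done++;
        }
        pool.shutdown();
        return done;
    }
}
```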
Another way of doing this (if the 'Y' state is required for some particular reason) would be to check for 'Y' records on startup, e.g. in a method annotated with @PostConstruct.
Since you tagged your question with spring, you can simply use a singleton bean with a destroy method: it is called when the application context is closed, which happens when the application itself is stopped.
XML: <bean class="..." destroy-method="destroy"/>
Java config (the bean class name here is a placeholder):
@Bean(destroyMethod = "destroy")
public CleanupBean cleanupBean() {
    return new CleanupBean();
}

public class CleanupBean {
    public void destroy() {
        // do your cleanup here, e.g. reset unprocessed 'Y' records to 'X'
    }
}

Handling Failed calls on the Consumer end (in a Producer/Consumer Model)

Let me try explaining the situation:
There is a messaging system that we are going to incorporate, which could be either a Queue or a Topic (in JMS terms).
1) Producer/Publisher: There is a service A. A produces messages and writes them to a Queue/Topic.
2) Consumer/Subscriber: There is a service B. B asynchronously reads messages from the Queue/Topic. B then calls a web service and passes the message to it. The web service takes a significant amount of time to process the message. (This action need not be processed in real time.)
The message broker is TIBCO.
My intention is not to miss processing any message from A, and to re-process a message at a later point in time in case processing failed the first time (perhaps as a batch).
Question:
I was thinking of writing the message to a DB before making the web service call. If the call succeeds, I would mark the message as processed; otherwise as failed. Later, a cron job would process all the requests that had initially failed.
Is writing to a DB a typical way of doing this?
Since you have a failure callback, you can simply requeue your message and have your consumer/subscriber pick it up and try again. If it failed because of some problem in the web service and you want to wait X time before trying again, then you can either schedule the web service call for a later time for that specific message (look into ScheduledExecutorService) or do as you described and use a cron job with some database entries.
If you only want each message retried once, keep a counter either with the message or in a Map<Message, Integer>.
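A small sketch of the ScheduledExecutorService idea (the scheduled task is a stand-in for the retried web-service call):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class DelayedRetry {
    // Schedule a retry after a delay instead of blocking the consumer thread.
    static int retryAfterDelay(long delayMillis) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger attempts = new AtomicInteger();
        Callable<Integer> attempt = attempts::incrementAndGet; // the retried call
        ScheduledFuture<Integer> retry =
            scheduler.schedule(attempt, delayMillis, TimeUnit.MILLISECONDS);
        int result = retry.get(); // demo only; the consumer would not block like this
        scheduler.shutdown();
        return result;
    }
}
```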
Crudely put, that is the technique, although there may be out-of-the-box solutions available that you can use. Typical ESB solutions support reliable messaging; have a look at MuleESB or Apache ActiveMQ as well.
It might be interesting to take advantage of the EMS platform you already have (example 1) instead of building a custom solution (example 2). But it all depends on the implementation language:
Example 1 - EMS is the "keeper": If I were to solve this problem with TIBCO BusinessWorks, I would use the "JMS transaction" feature of BW. By encompassing the EMS read and the WS call within the same "group", you ask for them both to be applied, or neither. If the call fails for some reason, the message is returned to EMS.
Two problems with this solution: you might not have BW, and the first failed operation would block the rest of the batch process (though that may be the desired behavior).
FYI, I understand it is possible to use such a feature in "pure Java", but I have never tried it: http://www.javaworld.com/javaworld/jw-02-2002/jw-0315-jms.html
Example 2 - A DB is the "keeper": If you go with your "DB" method, your queue/topic consumer continuously inserts data into a DB, and each record represents a task to be executed. This feels an awful lot like the simple "mapping engine" problem that every integration middleware aims to make easier. You could solve this with anything from custom Java code and multiple threads (DB inserter, WS job handlers, etc.) to an EAI middleware (like BW) or even a BPM engine (TIBCO has many solutions for that).
Of course, there are also other vendors... EMS is a JMS-standard implementation, as you know.
I would recommend using the built-in EMS (and JMS) features, as "guaranteed delivery" is what they're built for ;) - no DB needed at all...
You need to be aware that the first decisions will be:
- Do you need to deliver in order? (Then only one JMS session with client-acknowledge mode should be used.)
- How often, and at what recurring intervals, do you want to retry? (So you don't create an infinite loop for a message that cannot be processed by that web service.)
This is independent of the kind of client you use (TIBCO BW or, e.g., Java onMessage() in an MDB).
For "in order" delivery: make sure only one JMS session processes the messages and that it uses client-acknowledge mode. After you process a message successfully, you acknowledge it either by calling the JMS acknowledge() method or, in TIBCO BW, by executing the "commit" activity.
In case of an error you do not execute the acknowledge, so the message is put back on the queue for redelivery (you can see how many times it was redelivered in the JMS header).
EMS's explicit client-acknowledge mode also lets you do the same when order is not important and you need a few client threads to process messages.
For controlling how often the message gets processed, use:
- the max-redelivery property of the EMS queue (e.g. you could move the message to the dead-letter queue after x redeliveries so it does not hold up other messages);
- a redelivery delay to put a "pause" between redeliveries. This is useful when the web service needs to recover after a crash and should not be stormed with the same message again and again at short intervals through redelivery.
Hope that helps
Cheers
Seb

Multithreaded JMS code : CLIENT_ACKNOWLEDGE or transacted session

Edited question: I am working on multithreaded JMS receiver and publisher code (a standalone multithreaded Java application). The MOM is SonicMQ.
An XML message is received from a queue, stored procedures (which take 70 seconds to execute) are called, and the response is sent to a topic, all within 90 seconds.
I need to handle the condition where the broker is down or the application is in a scheduled shutdown, i.e. messages have been received from the queue and are being processed in Java when both the queue and the topic go down. To handle those messages which are no longer on the queue and have not yet been sent to the topic, but are in Java memory, I have the following options:
(1) Create a CLIENT_ACKNOWLEDGE session as:
connection.createSession(false, javax.jms.Session.CLIENT_ACKNOWLEDGE)
Here I would acknowledge a message only after the successful completion of the transactions (stored procedures).
(2) Use a transacted session, i.e. connection.createSession(true, -1). In this approach, an exception in the transaction (stored procedure) causes the message to be rolled back and redelivered, again and again, until I kill the program. Can I limit the number of redeliveries of JMS messages from the queue?
Also, of the above two approaches, which one is better?
The interface progress.message.jclient.ConnectionFactory has a method setMaxDeliveryCount(java.lang.Integer value) with which you can set the maximum number of times a message will be redelivered to your MessageConsumer. When that count is reached, the message is moved to the SonicMQ.deadMessage queue.
You can check this in the book "Sonic MQ Application Programming Guide" on page 210 (in version 7.6).
As to your question about which is better: that depends on whether the stored procedure minds being executed multiple times. If that is a problem, you should use a transaction that spans both the JMS queue and the database (Sonic has support for XA transactions). If executing multiple times is not a problem, I would go for not acknowledging the message and aborting the processing when you notice that the broker is down (most likely when you attempt to acknowledge the message). That way, another processor is able to handle the message if the first one is unable to do so after a connection failure.
If messages take a variable time to process, you may also want to look at the SINGLE_MESSAGE_ACKNOWLEDGE mode of the Sonic JMS session. Normally, calling acknowledge() on a message also acknowledges all messages that came before it; if you're processing messages out of order, that's not what you want. In single-message-acknowledge mode (which isn't in the JMS standard), acknowledge() only acknowledges the message on which it is called.
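The CLIENT_ACKNOWLEDGE flow from option (1) looks roughly like this sketch (javax.jms API; connection and queue setup omitted, and callStoredProcedures is a hypothetical stand-in for the real processing):

```java
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(queue);

Message msg = consumer.receive();
try {
    callStoredProcedures(msg);  // the 70-second processing step
    msg.acknowledge();          // acknowledge only after everything succeeded
} catch (Exception e) {
    // do not acknowledge: session.recover() restarts delivery from the last
    // unacknowledged message, so the broker will redeliver it
    session.recover();
}
```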
If you are worried about communicating with a message queue/broker/server/etc that might be down, and how that interrupts the overall flow of the larger process you are trying to design, then you should probably look into a JMS queue that supports clustering of servers so you can still reliably produce/consume messages when individual servers in the cluster go down.
Your question isn't 100% clear, but it seems the issue is that you're throwing an exception while processing a message when you really shouldn't be.
If there is an actual problem with the message itself, say the XML is malformed or invalid according to your data model, you do not want to roll back your transaction. You might want to log the error, but you have successfully processed that message; it's just that "success" in this case means identifying the message as problematic.
On the other hand, if there is a problem in processing the message that is caused by something external to the message (e.g. the database is down, or the destination topic is unavailable), you probably do want to roll the transaction back. However, you also want to stop consuming messages until the problem is resolved; otherwise you'll end up with the scenario you've described, continually processing the same message over and over and failing every time you try to access whatever resource is currently unavailable.
Without knowing what messaging provider you are using, I don't know whether this will help you.
MQ Series messages have a backout counter that can be enabled by configuring the harden backout counter option on the queue.
When I have previously had this problem, I did something like the following (n is your redelivery threshold; WebSphere MQ exposes the backout counter to JMS via the JMSXDeliveryCount property):

Message msg = consumer.receive();
int deliveryCount = msg.getIntProperty("JMSXDeliveryCount");
if (deliveryCount > n) {
    moveMessageToAppDeadLetterQueue(msg);
    return;
}
processMessage(msg);

The MQ Series header fields are accessible as JMS properties.
The above approach would also help if you can use XA transactions to roll back or commit the database and the queue manager simultaneously. However, XA transactions incur a significant performance penalty, and with stored procs this probably isn't possible.
An alternative approach would be to write the message immediately to a message_table as a BLOB and then commit the message off the queue.
Put a trigger on the message_table to invoke the stored proc, and add the JMS response mechanism into the stored proc.

Scheduled Retrying for an associated JMS Message

I have a single-threaded message listener that listens for incoming messages. The received messages are persisted in a database as they are received.
There is a message A, and an associated message B follows it with a reference to it. In some odd occurrences, B arrives before A. In that case there have to be 3 retries at 'x' equal intervals to see whether A has arrived, and then the association is persisted.
As the message listener is single-threaded, putting its thread to sleep would affect the whole system, so a separate thread has to do the retrying.
Can we use the Quartz job scheduler for this purpose, to avoid handling multithreading issues and to have a persistent store, in either of the following two ways:
1. Schedule a job in Quartz 3 times and keep a flag in the JobDataMap to check whether a previous retry succeeded, in which case the job returns without doing anything, or
2. Schedule a job to retry once and, if the retry fails, schedule the same job again after a few seconds.
Can Quartz only be used for repetitive jobs, without state information spanning jobs, or is there a better way to do this?
You should configure your JMS provider to set a redelivery delay on your message queue. In your code, you call context.setRollbackOnly() to abort a message that does not pass the prerequisites check.
In that case, the execution scenario becomes:
1. consume "B", check for the prerequisite and detect that A is missing
2. roll back the transaction; the message returns to the queue and will be redelivered after the configured delay
3. consume and process the next message, "A"
4. after the delay, the MDB consumes and processes "B" again, this time with success
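A hedged sketch of that MDB (EJB 3 style with container-managed transactions; prerequisiteArrived and persistAssociation are hypothetical helpers standing in for your DB lookups):

```java
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven
public class AssociationMDB implements MessageListener {

    @Resource
    private MessageDrivenContext ctx;

    @Override
    public void onMessage(Message message) {
        if (!prerequisiteArrived(message)) { // B arrived before A
            ctx.setRollbackOnly();           // B returns to the queue and is
            return;                          // redelivered after the configured delay
        }
        persistAssociation(message);         // normal case: A is already stored
    }

    private boolean prerequisiteArrived(Message m) {
        // look up whether A has been persisted yet
        return true;
    }

    private void persistAssociation(Message m) {
        // store the A-B association
    }
}
```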
