I have two scheduled component classes, each uploading files.
I created an email-sending method for each of them so that a reminder email is sent to me whenever an upload exception occurs.
The flow looks like this:
Scheduler One --- if exception during upload ---> send an email after the exception
Scheduler Two --- if exception during upload ---> send an email after the exception
Now I want to change it to:
Scheduler One + Scheduler Two
--if exception--> send one email after both schedulers have run
How can I do that?
Your use case sounds rather odd. Schedulers run independently, so if you want to share information (that an exception was thrown) between the two, you have to store that information somewhere: an entry in a database, or a global variable held in memory at runtime.
I would, however, suggest that you merge both schedulers into one. If they are not independent, why divide the code? That saves you from building these hacks where the schedulers need to be connected.
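If you do keep the schedulers separate, the "global variable during runtime" idea can be sketched with a shared flag. This is a hypothetical illustration (the class and method names are made up, and in a real Spring app the methods below would sit inside `@Scheduled` beans): both jobs flip a shared `AtomicBoolean` on failure, and a third scheduled method, running after both, sends at most one reminder mail.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class UploadFailureTracker {
    // shared runtime state visible to both scheduled jobs
    static final AtomicBoolean uploadFailed = new AtomicBoolean(false);

    // each scheduler wraps its upload logic in this call
    public static void runUpload(Runnable upload) {
        try {
            upload.run();
        } catch (RuntimeException e) {
            uploadFailed.set(true);        // remember the failure for later
        }
    }

    // a third scheduled method, running after both uploads, asks this once;
    // getAndSet(false) resets the flag so only one reminder mail goes out
    public static boolean shouldSendReminder() {
        return uploadFailed.getAndSet(false);
    }
}
```

Note that an in-memory flag is lost on restart and is not shared across instances; if either matters, persist the flag in the database instead.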
We have a Java/Spring application which runs on EKS pods, with records stored in a MongoDB collection.
STATUS: READY, STARTED, COMPLETED
The application needs to pick the records that are in READY status and update their status to STARTED. Once the processing of a record is completed, its status is updated to COMPLETED.
Once a record is STARTED, it may take a few hours to complete; until then, other pods (other instances of the same app) should not pick this record. If an exception occurs, the app changes the status back to READY so that another pod (or the same one) can pick the record up again for processing.
Requirement: If a pod crashes while a record is being processed (STARTED), before the status is changed to READY/COMPLETED, another pod should be able to pick this record up and start processing it again.
We have some solutions in mind, but we are trying to find the best one. Please suggest some good approaches.
You can use a shutdown hook from Spring:

@Component
public class Bean1 {

    @PreDestroy
    public void destroy() {
        // handle the database change here
        System.out.println("Status changed to READY");
    }
}

(Note that a @PreDestroy hook only runs on a graceful shutdown, e.g. when the pod receives SIGTERM, not on a hard crash.)
Beyond that, this kind of job might run better in a messaging architecture, using SQS for example. Instead of using a status column in the database to orchestrate the work, you can publish the messages that need to be consumed (the records in READY state) to an SQS queue and have a pool of workers consuming from it. If something crashes, or the pod running a worker is reclaimed, the message goes back to SQS and can be consumed by another pod.
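The requeue-on-failure behavior described above can be sketched with a plain in-JVM queue. This is an illustration of the semantics only, not an SQS client (SQS gets the same effect automatically through its visibility timeout), and the class and method names are made up:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Predicate;

public class RequeueWorker {
    // Take one message off the queue; if the handler fails (returns false
    // or throws), put the message back so another worker can retry it.
    public static <T> boolean processOne(BlockingQueue<T> queue, Predicate<T> handler) {
        T message = queue.poll();          // claim the next message, if any
        if (message == null) {
            return false;                  // nothing to do
        }
        try {
            if (handler.test(message)) {
                return true;               // success: message is consumed for good
            }
        } catch (RuntimeException e) {
            // processing crashed; fall through and requeue
        }
        queue.offer(message);              // failure: back on the queue for retry
        return false;
    }
}
```

With a real broker, the "requeue" step is handled by the broker itself when the consumer dies without acknowledging the message.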
I have a requirement to process a large list of users daily and send them email and SMS notifications based on certain scenarios. I am using the Java EE batch processing model for this. My job XML is as follows:
<step id="sendNotification">
    <chunk item-count="10" retry-limit="3">
        <reader ref="myItemReader"></reader>
        <processor ref="myItemProcessor"></processor>
        <writer ref="myItemWriter"></writer>
        <retryable-exception-classes>
            <include class="java.lang.IllegalArgumentException"/>
        </retryable-exception-classes>
    </chunk>
</step>
MyItemReader's open() method reads all users from the database, and readItem() returns one user at a time via a list iterator. In MyItemProcessor, the actual email notification is sent to the user, and the users are then persisted to the database in the MyItemWriter class for that chunk.
@Named
public class MyItemReader extends AbstractItemReader {

    private Iterator<User> iterator = null;
    private User lastUser;

    @Inject
    private MyService service;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        super.open(checkpoint);
        List<User> users = service.getUsers();
        iterator = users.iterator();
        if (checkpoint != null) {
            User checkpointUser = (User) checkpoint;
            System.out.println("Checkpoint Found: " + checkpointUser.getUserId());
            while (iterator.hasNext() && !iterator.next().getUserId().equals(checkpointUser.getUserId())) {
                System.out.println("skipping already read users ... ");
            }
        }
    }

    @Override
    public Object readItem() throws Exception {
        User user = null;
        if (iterator.hasNext()) {
            user = iterator.next();
            lastUser = user;
        }
        return user;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return lastUser;
    }
}
My problem is that the checkpoint stores the last record that was executed in the previous chunk. If I have a chunk with the next 10 users and an exception is thrown in MyItemProcessor for the 5th user, then on retry the whole chunk is executed again and all 10 users are re-processed. I don't want notifications to be sent again to users who have already been processed.
Is there a way to handle this? How should this be done efficiently?
Any help would be highly appreciated.
Thanks.
I'm going to build on the comments from @cheng. Credit to him here; hopefully my answer adds value by organizing and presenting the options usefully.
Answer: Queue up messages for another MDB to get dispatched to send emails
Background:
As @cheng pointed out, a failure means the entire transaction is rolled back, and the checkpoint doesn't advance.
So how to deal with the fact that your chunk has sent emails to some users but not all? (You might say it rolled back but with "side effects".)
So we could restate your question then as: How to send email from a batch chunk step?
Well, assuming you had a way to send emails through a transactional API (implementing XAResource, etc.), you could use that API.
Assuming you don't, I would do a transactional write to a JMS queue and then send the emails with a separate MDB (as @cheng suggested in one of his comments).
Suggested Alternative: Use ItemWriter to send messages to JMS queue, then use separate MDB to actually send the emails
With this approach you still gain efficiency by batching the processing and the updates to your DB (you were only sending the emails one at a time anyway), and you can benefit from simple checkpointing and restart without having to write complicated error handling.
This is also likely to be reusable as a pattern across batch jobs and outside of batch even.
Other alternatives
Some other ideas that I don't think are as good, listed for the sake of discussion:
Add batch application logic tracking users emailed (with ItemProcessListener)
You could build your own list of successful and/or failed emails using the ItemProcessListener methods afterProcess and onProcessError.
On restart, then, you would know which users in the current chunk had already been emailed: the chunk is re-positioned to its start because the whole transaction rolled back, even though some emails were already sent.
This certainly complicates your batch logic, and you also have to persist this success or failure list somehow. Plus this approach is probably highly specific to this job (as opposed to queuing up for an MDB to process).
But it's simpler in that you have a single batch job without the need for a messaging provider and a separate app component.
If you go this route you might want to use a combination of both a skippable and a "no-rollback" retryable exception.
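To make the tracking idea concrete, here is a minimal, self-contained sketch of the bookkeeping only. In a real job this logic would live in a bean implementing javax.batch.api.chunk.listener.ItemProcessListener (its afterProcess and onProcessError callbacks), and the sets would have to be persisted to survive a JVM restart; here users are reduced to id strings and the listener interface is elided so the sketch compiles on its own:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class EmailedUserTracker {
    private final Set<String> emailed = ConcurrentHashMap.newKeySet();
    private final Set<String> failed = ConcurrentHashMap.newKeySet();

    // would be driven by ItemProcessListener.afterProcess(item, result)
    public void afterProcess(String userId) {
        emailed.add(userId);
    }

    // would be driven by ItemProcessListener.onProcessError(item, ex)
    public void onProcessError(String userId) {
        failed.add(userId);
    }

    // consulted when the chunk is retried after a rollback,
    // so already-notified users can be skipped
    public boolean alreadyEmailed(String userId) {
        return emailed.contains(userId);
    }

    public boolean previouslyFailed(String userId) {
        return failed.contains(userId);
    }
}
```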
Single-item chunk
If you define your chunk with item-count="1", you avoid the complicated checkpointing and error-handling code. You sacrifice efficiency, though, so this only makes sense if the other aspects of batch processing are very compelling: e.g., scheduling and management of jobs through a common interface, or the ability to restart a job at its failing step.
If you were to go this route, you might want to consider defining socket and timeout exceptions as "no-rollback" exceptions (using <no-rollback-exception-classes>), since there is nothing to be gained from rolling back, and you might want to retry on a network timeout.
Since you specifically mentioned efficiency, I'm guessing this is a bad fit for you.
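For reference, the single-item variant of the step from the question might look like this (the SocketTimeoutException entry is an example of a no-rollback exception, not something from the original job):

```xml
<step id="sendNotification">
    <chunk item-count="1" retry-limit="3">
        <reader ref="myItemReader"></reader>
        <processor ref="myItemProcessor"></processor>
        <writer ref="myItemWriter"></writer>
        <no-rollback-exception-classes>
            <include class="java.net.SocketTimeoutException"/>
        </no-rollback-exception-classes>
    </chunk>
</step>
```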
Use a transaction synchronization
This could perhaps work, but the batch API doesn't make it especially easy, and you could still end up in a state where the chunk completes but one or more email sends fail.
Your current item processor is doing something outside the scope of the chunk transaction, which has caused the application state to get out of sync. If your requirement is to send the emails only after all items in a chunk have completed successfully, you can move the emailing part into an ItemWriteListener.afterWrite(items) callback.
I am using Spring 4.0.2 for my web application, which is about file processing. Files have statuses such as "In Progress", "On Hold", and "Completed". One user can complete multiple files, but only one at a time, so at any moment only one file may be "In Progress" for a given user. Now I want to check every 15 minutes whether any event has occurred on a particular file; if no event has occurred, I want to change the file's status from "In Progress" to "On Hold". So I tried to write a scheduler in Spring, as shown below.
@Scheduled(fixedDelay = 15 * 60 * 1000)
public void checkFrequently()
{
    // here I check whether any event occurred in the last 15 minutes.
    // I need the HttpSession for two purposes:
    // 1. to get the currently logged-in user
    // 2. to get the current file for that user
}
Is there any way to get the session in this method? If it is impossible, what are the alternatives?
It is not possible. The scheduler is started at application launch, when there is no session, and runs in a thread separate from the servlet container.
Usually, you persist in some form the state that you would like to make accessible to the beans managed by the scheduler (in a database, a plain file, a queue, etc.).
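One in-memory variant of that idea is sketched below. The names (FileActivityRegistry, touch, staleUsers) are made up for illustration: the web layer, where the session IS available, records each user's last event in shared state, and the @Scheduled method scans that state instead of touching the session. In a clustered deployment this map would have to live in a database or cache rather than in the JVM.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FileActivityRegistry {
    // userId -> time of that user's last event on their "In Progress" file
    private final Map<String, Instant> lastEvent = new ConcurrentHashMap<>();

    // called from the web layer (controller/filter), where the session is available
    public void touch(String userId, Instant now) {
        lastEvent.put(userId, now);
    }

    // called from the @Scheduled method: which users had no event within maxIdle?
    public List<String> staleUsers(Instant now, Duration maxIdle) {
        List<String> stale = new ArrayList<>();
        lastEvent.forEach((user, last) -> {
            if (Duration.between(last, now).compareTo(maxIdle) > 0) {
                stale.add(user);
            }
        });
        return stale; // the scheduler flips these users' files to "On Hold"
    }
}
```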
I have a requirement where I want to trigger an event based on some action, and this functionality is to be implemented as a jar file.
Let me explain this with an example.
There is a web application WAR_FILE.
There is a rest client named REST_CLIENT.
There is a jar file that has api methods for client REST_CLIENT named as MY_JAR.
Now WAR_FILE will be using MY_JAR to post data to REST_CLIENT.
But WAR_FILE does not want to wait for the response. It is fire-and-forget: post the data and do not care about the response.
MY_JAR takes all inputs from WAR_FILE and stores them in a queue as a cache. I am using Redis to maintain this queue.
The main problem is that MY_JAR has to keep checking whether there is any request in that queue to act upon.
Solution 1: use Quartz in MY_JAR to check every n seconds whether there is a new request to act upon.
Problem 1: WAR_FILE may itself be using Quartz.
Problem 2: one thread may be executing a list of tasks from the queue while another thread comes along and starts executing the same request.
Solution 2: use a cron job.
Problem: same as problem 2 of solution 1.
Solution 3: RabbitMQ / ActiveMQ (just heard of them).
Problem: I do not know how to use them or how they could help me.
Please help me.
I found various solutions to this problem. It is essentially messaging (JMS-style), a technology I was not previously aware of.
(1) Using Redis pub/sub event publication
http://redis.io/topics/pubsub
Plain Java (Jedis): http://www.basrikahveci.com/a-simple-jedis-publish-subscribe-example/
Spring: http://java.dzone.com/articles/redis-pubsub-using-spring
(2) Using RabbitMQ
RabbitMQ installation: https://www.rabbitmq.com/install-debian.html
Java example: http://www.rabbitmq.com/tutorials/tutorial-one-java.html
Adding a task to an App Engine task queue fails with the following exception:
com.google.appengine.api.taskqueue.QueueApiHelper.translateError(QueueApiHelper.java:86)
com.google.appengine.api.taskqueue.QueueImpl.add(QueueImpl.java:505)
com.google.appengine.api.taskqueue.QueueImpl.add(QueueImpl.java:427)
com.google.appengine.api.taskqueue.QueueImpl.add(QueueImpl.java:412)
It works fine on the local dev server for a while before failing with the following exception:
com.google.appengine.api.taskqueue.QueueApiHelper.translateError(QueueApiHelper.java:74)
com.google.appengine.api.taskqueue.QueueImpl.add(QueueImpl.java:505)
com.google.appengine.api.taskqueue.QueueImpl.add(QueueImpl.java:427)
com.google.appengine.api.taskqueue.QueueImpl.add(QueueImpl.java:412)
Here is the code I am using:

TaskOptions taskOption = TaskOptions.Builder.withUrl(servletPath).countdownMillis(time);
taskOption.taskName(name);
Queue queue = QueueFactory.getQueue(taskQueue);
queue.add(taskOption);
It seems that the local server is more lax about the task name. The exception at QueueApiHelper.java:86 occurs on App Engine if the name contains uppercase characters or if the name is being reused (hopefully App Engine allows a name to be reused after a certain period; I have noticed that it won't allow reusing a name within 5 minutes). The exception at QueueApiHelper.java:74 occurs on the local server if two tasks with the same name are created at the same time. So here is the (descriptive) fix:
taskOption.taskName(name.toLowerCase().replaceAll("[^a-z0-9]+", "") + System.currentTimeMillis());
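The fix above can be checked in isolation. This small sketch (the TaskNames class is made up for illustration) lowercases the name, strips everything outside [a-z0-9], and appends a timestamp so the name is never reused:

```java
public class TaskNames {
    // Mirrors the fix: lowercase, strip non-alphanumerics, append a timestamp
    public static String sanitize(String name, long timestamp) {
        return name.toLowerCase().replaceAll("[^a-z0-9]+", "") + timestamp;
    }
}
```

In production you would pass System.currentTimeMillis() as the timestamp; a fixed value is used in testing so the result is deterministic.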