How to save Quartz executed jobs? - java

One problem: after jobs have executed, Quartz deletes them from the qrtz_triggers table, but in some situations I need to repeat a job that failed.
Is there any configuration option or other way to store jobs in another table after they execute?
Thanks

If you are using JDBCJobStore, your jobs are stored in the QRTZ_JOB_DETAILS table, your simple triggers are stored in QRTZ_SIMPLE_TRIGGERS, your cron triggers are stored in QRTZ_CRON_TRIGGERS, and all the triggers are stored in QRTZ_TRIGGERS.
If you want your job to be durable and remain when no triggers are associated with it, call storeDurably(true) when building your JobDetail. For example:
JobDetail jobDetail = JobBuilder.newJob()
        .ofType(DataMapJob.class)
        .withIdentity("dataJob", "dataJobGroup")
        .storeDurably(true)
        .requestRecovery(true)
        .build();
Hope it helps.

This is exactly what the durable flag is for. Durable jobs remain registered in Quartz even if there are no triggers associated with the job. On the other hand, non-durable jobs are automatically deleted by Quartz when there are no associated triggers (e.g. after all associated triggers have fired and been deleted by Quartz).
For details, you can refer to the JobDetailImpl javadoc.
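As a small illustration (a sketch only; the job class and the scheduler variable are assumed to exist, as in the snippet above), a durable job can even be registered with no trigger at all and will stay in QRTZ_JOB_DETAILS until you remove it:
import org.quartz.JobBuilder;
import org.quartz.JobDetail;

// Sketch: "scheduler" is your existing Scheduler instance, DataMapJob your job class.
JobDetail durableJob = JobBuilder.newJob(DataMapJob.class)
        .withIdentity("dataJob", "dataJobGroup")
        .storeDurably()                 // keep the QRTZ_JOB_DETAILS row even with no triggers
        .build();

// addJob without a trigger only works for durable jobs;
// "true" replaces an existing job definition with the same JobKey.
scheduler.addJob(durableJob, true);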

Related

Does the quartz scheduler delete triggers?

I basically want to know whether the scheduler itself deletes triggers after they have fired, if there is no other point in time at which they would ever fire again.
I need to know this so that I know how to tidy up after a job has been executed.
I have already read through many posts about triggers and jobs. I have also read through all the official Quartz lessons. The only thing I found out there was that jobs can be deleted, if you set their "durable" property to false, when there are no more triggers pointing to them. That is also how my question came up about how or when the scheduler deletes its triggers.
Yes, it automatically removes these triggers.
I've found some documentation for this topic: https://www.quartz-scheduler.org/api/2.1.7/org/quartz/SimpleTrigger.html
There is a line stating:
int getRepeatCount()
Get the number of times the SimpleTrigger should repeat, after which it will be automatically deleted.
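For example (a minimal sketch; the trigger and job names are invented), a simple trigger built like this fires six times in total and is then removed from QRTZ_TRIGGERS/QRTZ_SIMPLE_TRIGGERS by the scheduler:
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("cleanupTrigger", "exampleGroup")   // hypothetical names
        .forJob("cleanupJob", "exampleGroup")
        .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                .withIntervalInSeconds(10)
                .withRepeatCount(5))                       // 1 initial firing + 5 repeats = 6 firings
        .startNow()
        .build();
// After the sixth firing the trigger is deleted automatically;
// the job itself is also removed unless it was stored durably.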

Using @DisallowConcurrentExecution in Quartz scheduler

I am sorry if this question is too naive,
I am expecting the jobs to be scheduled so that they execute one by one, not in parallel. Each job is executed only once.
From the docs, @DisallowConcurrentExecution is
An annotation that marks a {@link Job} class as one that must not have multiple instances executed concurrently (where instance is based upon a {@link JobDetail} definition - or in other words based upon a {@link JobKey}).
But when I schedule a job with the same JobKey, I get
Failed to schedule a job org.quartz.ObjectAlreadyExistsException
If I generate a different JobKey, it does not heed @DisallowConcurrentExecution and the jobs are executed in parallel (as mentioned in the docs).
Please suggest how I can achieve this; any pointers would really help!
PS: I do not know in advance which jobs will be scheduled, so I need some way to dynamically link up the jobs if a job is already running.
Same JobKey = same job.
Different JobKey = different job.
Quartz won't let you use the same JobKey more than once because that'd be two jobs with the same key. Like having two users with the same ID.
What you need to do is schedule different JobTriggers for the same JobKey.
@DisallowConcurrentExecution avoids overlapping executions of the same job. If you use a different JobKey, it's not the same job anymore, so the annotation doesn't have any effect. But for a given JobKey with several JobTriggers, @DisallowConcurrentExecution will keep the triggers from launching a new execution of the job if the previous one hasn't finished yet.
I suggest having a look at Quartz's documentation to get a deeper understanding of the above concepts.
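A hedged sketch of that setup (class and identity names are invented, and scheduler is assumed to be an existing Scheduler instance): one job class marked @DisallowConcurrentExecution, one JobDetail, and as many triggers as you need pointing at the same JobKey.
import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

@DisallowConcurrentExecution
public class ReportJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        // long-running work; a second firing for this JobKey will wait
        // until this execution finishes instead of running in parallel
    }
}

// Register the job once...
JobDetail job = JobBuilder.newJob(ReportJob.class)
        .withIdentity("reportJob", "reports")
        .storeDurably()
        .build();
scheduler.addJob(job, true);

// ...then attach as many triggers as you need to the same JobKey.
Trigger everyMinute = TriggerBuilder.newTrigger()
        .withIdentity("everyMinute", "reports")
        .forJob(job.getKey())
        .withSchedule(SimpleScheduleBuilder.repeatMinutelyForever())
        .build();
scheduler.scheduleJob(everyMinute);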

Quartz scheduler clustered

I'm trying to understand how the Quartz scheduler works in a clustered environment. I believe pointing multiple instances of the scheduler app to the same DB and also setting isClustered=true will make sure only one scheduler instance fires a given job at a time. However, I have the following questions:
Who ensures that only one scheduler executes the job, and how?
Can two scheduler instances have the same name (org.quartz.scheduler.instanceName = MyScheduler)? The ids are AUTO, so I guess they will be distinct?
Who sets DB parameters like the next fire time?
Ideally, should any of the 11 or so predefined tables (e.g. QRTZ_TRIGGERS) be pre-populated, or are they populated based on the beans in the application on app startup?
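For reference, a clustered JDBC-store setup like the one described is usually configured with properties roughly like these (the delegate class, data source name, and check-in interval are illustrative and depend on your environment):
# Same instanceName on every node; AUTO generates a distinct instanceId per node
org.quartz.scheduler.instanceName = MyScheduler
org.quartz.scheduler.instanceId = AUTO

# Shared JDBC job store; all nodes point at the same database
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000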

Creating Quartz Triggers in a Clustered Environment

Related: Quartz Clustering - triggers duplicated when the server starts
I'm using Quartz Scheduler to manage scheduled jobs in a java-based clustered environment. There are a handful of nodes in the cluster at any given time, and they all run Quartz, backed by a data store in a postgresql database that all nodes connect to.
When an instance is initialized, it tries to create or update the jobs and triggers in the Quartz data store by executing this code:
private void createOrUpdateJob(JobKey jobKey, Class<? extends org.quartz.Job> clazz, Trigger trigger) throws SchedulerException {
    JobBuilder jobBuilder = JobBuilder.newJob(clazz).withIdentity(jobKey);
    if (!scheduler.checkExists(jobKey)) {
        // if the job doesn't already exist, we can create it, along with its trigger. this prevents us
        // from creating multiple instances of the same job when running in a clustered environment
        scheduler.scheduleJob(jobBuilder.build(), trigger);
        log.error("SCHEDULED JOB WITH KEY " + jobKey.toString());
    } else {
        // if the job has exactly one trigger, we can just reschedule it, which allows us to update the schedule for
        // that trigger.
        List<? extends Trigger> triggers = scheduler.getTriggersOfJob(jobKey);
        if (triggers.size() == 1) {
            scheduler.rescheduleJob(triggers.get(0).getKey(), trigger);
            return;
        }
        // if for some reason the job has multiple triggers, it's easiest to just delete and re-create the job,
        // since we want to enforce a one-to-one relationship between jobs and triggers
        scheduler.deleteJob(jobKey);
        scheduler.scheduleJob(jobBuilder.build(), trigger);
    }
}
This approach solves a number of problems:
If the environment is not properly configured (i.e. jobs/triggers don't exist), then they will be created by the first instance that launches
If the job already exists, but I want to modify its schedule (change a job that used to run every 7 minutes to now run every 5 minutes), I can define a new trigger for it, and a redeploy will reschedule the triggers in the database
Exactly one instance of a job will be created, because we always refer to jobs by the specified JobKey, which is defined by the job itself. This means that jobs (and their associated triggers) are created exactly once, regardless of how many nodes are in the cluster, or how many times we deploy.
This is all well and good, but I'm concerned about a potential race condition when two instances are started at exactly the same time. Because there's no global lock around this code that all nodes in the cluster will respect, if two instances come online at the same time, I could end up with duplicate jobs or triggers, which kind of defeats the point of this code.
Is there a best practice for automatically defining Quartz jobs and triggers in a clustered environment? Or do I need to resort to setting my own lock?
I am not sure if there is a better way to do this in Quartz. But in case you are already using Redis or Memcache, I would recommend letting all instances perform an atomic increment against a well known key. If the code you pasted is supposed to run only one job per cluster per hour, you could do the following:
long timestamp = System.currentTimeMillis() / 1000 / 60 / 60;
String key = String.format("%s_%d", jobId, timestamp);
// this will only be true for one instance in the cluster per (job, timestamp) tuple
boolean shouldExecute = redis.incr(key) == 1;
if (shouldExecute) {
    // run the mutually exclusive code
}
The timestamp gives you a moving window within which the instances compete to execute the job.
I had (almost) the same problem: how to create triggers and jobs exactly once per software version in a clustered environment. I solved the problem by assigning one of the cluster nodes to be the lead node during start-up and letting it re-create the Quartz jobs. The lead node is the one which first successfully inserts the git revision number of the running software into the database. The other nodes use the Quartz configuration created by the lead node. Here's the complete solution: https://github.com/perttuta/quartz
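A rough sketch of that election step (table and column names are hypothetical; it assumes a unique constraint on the revision column): the node whose insert succeeds becomes the lead and re-creates the jobs, the others skip that step.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch only: deployment_lock(revision) is an assumed table with a unique constraint.
private boolean tryBecomeLeadNode(Connection conn, String gitRevision) {
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO deployment_lock (revision) VALUES (?)")) {
        ps.setString(1, gitRevision);
        ps.executeUpdate();
        return true;     // insert succeeded: this node is the lead for this revision
    } catch (SQLException e) {
        return false;    // unique-constraint violation: another node already won
    }
}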

Java Scheduler Job/Task/Thread per entry in the database table

I want a scheduler that creates a job/task/thread per entry in my database table.
Further, I want a mechanism to start, pause, stop, and restart each job without affecting the other jobs/tasks/threads. At any moment, I should be able to create a new job or delete one.
I am planning to handle all the job-related operations mentioned above through a web application hosted on a Tomcat server.
Which Java scheduler should I opt for, and how do I start with this?
You can use Quartz Scheduler
In Quartz I would suggest using JDBCJobStore, which allows you to store your jobs in the database.
As for creating a job per entry in the DB, you can have a JobCreatorFactory which reads the entries from your DB and creates a job for each one; those jobs will then be stored in the database.
Ideally it is better to have a separate job class (implementing Job) for every job you want to create. But as you want to create jobs dynamically, you can have a general job class which takes the context of the job and performs the respective operation, based on that context, inside the overridden execute method of your general job class.
Using Quartz it's possible to start, stop, and pause your jobs without affecting other jobs.
Hope this helps!
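A hedged sketch of that idea (the entity accessors like row.getId() are made up, and scheduler is assumed to be an existing Scheduler instance): one general job class driven by its JobDataMap, and one JobDetail created per database row.
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;

// General-purpose job: what it actually does is driven by its JobDataMap.
public class GeneralJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        String taskType = context.getMergedJobDataMap().getString("taskType");
        String payload = context.getMergedJobDataMap().getString("payload");
        // dispatch on taskType and act on payload...
    }
}

// In your job creator: one JobDetail per table row (row is a hypothetical entity).
JobDetail job = JobBuilder.newJob(GeneralJob.class)
        .withIdentity("job-" + row.getId(), "dbJobs")
        .usingJobData("taskType", row.getTaskType())
        .usingJobData("payload", row.getPayload())
        .storeDurably()
        .build();
scheduler.addJob(job, true);   // persisted via JDBCJobStore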
