I am working on an application where we have hundreds of jobs that need to be scheduled for execution.
Here is my sample quartz.properties file:
org.quartz.scheduler.instanceName=QuartzScheduler
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.threadPool.threadCount=7
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.MSSQLDelegate
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.dataSource=myDS
org.quartz.dataSource.myDS.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL=jdbc:mysql://localhost:3306/quartz
org.quartz.dataSource.myDS.user=root
org.quartz.dataSource.myDS.password=root
org.quartz.dataSource.myDS.maxConnections=5
Though this is working fine, we are planning to separate the jobs into different groups so that they are easier to maintain.
Groups will be unique, and we want that when a user (Admin) creates a new group, a new scheduler instance is created, and all jobs within that group are handled by that scheduler instance from then on.
This means that if the Admin creates a new group, say NewProductNotification, then we should be able to create a scheduler instance with the same name, NewProductNotification, and all jobs that are part of the NewProductNotification group should be handled by the NewProductNotification scheduler instance.
How is this possible, and how can we store this information in the database so that the next time the server is up, Quartz knows about all the scheduler instances? Or do we need to add the information about each new instance to the properties file?
As the properties file above shows, we are using JDBCJobStore to handle everything through the database.
I don't think dynamically creating schedulers is a good design approach in Quartz. You can share the same database tables between multiple schedulers (job details and triggers have the scheduler name as part of their primary key), but a Scheduler is a fairly heavyweight object.
Can you explain why you really need separate schedulers? Maybe you can simply use job groups and trigger groups (you are in fact already using the term "group") to distinguish jobs/triggers, as sketched below. You can also give each trigger a different priority.
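For illustration, here is a minimal sketch of one scheduler handling jobs organized by group; the NotificationJob class and all names are made up for the example:
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.impl.matchers.GroupMatcher;

// inside some setup method:
Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

// job and trigger both live in the "NewProductNotification" group
JobDetail job = JobBuilder.newJob(NotificationJob.class)
        .withIdentity("sendEmail", "NewProductNotification")
        .build();
Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("sendEmailTrigger", "NewProductNotification")
        .withSchedule(SimpleScheduleBuilder.repeatMinutelyForever(5))
        .withPriority(5) // each trigger can get its own priority
        .build();
scheduler.scheduleJob(job, trigger);

// later, the whole group can be managed as one unit:
scheduler.pauseJobs(GroupMatcher.jobGroupEquals("NewProductNotification"));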
As a side note:
I'm using JobStoreCMT and I'm seeing deadlocks, what can I do?
Make sure you have at least number-of-threads-in-thread-pool + 2 connections in your datasources.
And your configuration violates that rule: 7 worker threads would need at least 9 connections, but the pool allows only 5. Swap the two values and it will be fine:
org.quartz.threadPool.threadCount=7
org.quartz.dataSource.myDS.maxConnections=5
From: I'm using JobStoreCMT and I'm seeing deadlocks, what can I do?
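Applied to the configuration above, the swapped values would be:
org.quartz.threadPool.threadCount=5
org.quartz.dataSource.myDS.maxConnections=7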
Dynamically creating schedules is very much possible. You would need to create JobDetail and Trigger objects and pass them to the Scheduler (for example one obtained from a SchedulerFactoryBean); it will take care of the rest.
Related
I'm trying to understand how Quartz Scheduler works in a clustered environment. I believe that pointing multiple instances of the scheduler app at the same DB and setting isClustered=true will ensure that only one scheduler fires a given job at a time (see the configuration sketch after these questions). However, I have the following questions:
Who ensures that only one scheduler executes the job and how?
Can two scheduler instances have the same name? (Instance ids are auto-generated, so I guess those will be distinct: org.quartz.scheduler.instanceName=MyScheduler)
Who sets DB parameters like next fire time?
Ideally, should any of the 11 or so predefined tables (e.g. QRTZ_TRIGGERS) be pre-populated, or are they populated from the beans in the application upon app startup?
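For reference, a minimal clustered configuration looks roughly like this (the property names are from the Quartz documentation; the values are illustrative):
org.quartz.scheduler.instanceName=MyScheduler
org.quartz.scheduler.instanceId=AUTO
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval=20000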
Related: Quartz Clustering - triggers duplicated when the server starts
I'm using Quartz Scheduler to manage scheduled jobs in a Java-based clustered environment. There are a handful of nodes in the cluster at any given time, all running Quartz, backed by a data store in a PostgreSQL database that all nodes connect to.
When an instance is initialized, it tries to create or update the jobs and triggers in the Quartz data store by executing this code:
private void createOrUpdateJob(JobKey jobKey, Class<? extends org.quartz.Job> clazz, Trigger trigger) throws SchedulerException {
    JobBuilder jobBuilder = JobBuilder.newJob(clazz).withIdentity(jobKey);
    if (!scheduler.checkExists(jobKey)) {
        // if the job doesn't already exist, we can create it, along with its trigger. this prevents us
        // from creating multiple instances of the same job when running in a clustered environment
        scheduler.scheduleJob(jobBuilder.build(), trigger);
        log.info("SCHEDULED JOB WITH KEY " + jobKey.toString());
    } else {
        // if the job has exactly one trigger, we can just reschedule it, which allows us to update
        // the schedule for that trigger.
        List<? extends Trigger> triggers = scheduler.getTriggersOfJob(jobKey);
        if (triggers.size() == 1) {
            scheduler.rescheduleJob(triggers.get(0).getKey(), trigger);
            return;
        }
        // if for some reason the job has multiple triggers, it's easiest to just delete and re-create
        // the job, since we want to enforce a one-to-one relationship between jobs and triggers
        scheduler.deleteJob(jobKey);
        scheduler.scheduleJob(jobBuilder.build(), trigger);
    }
}
This approach solves a number of problems:
If the environment is not properly configured (i.e. jobs/triggers don't exist), then they will be created by the first instance that launches
If the job already exists, but I want to modify its schedule (change a job that used to run every 7 minutes to now run every 5 minutes), I can define a new trigger for it, and a redeploy will reschedule the triggers in the database
Exactly one instance of a job will be created, because we always refer to jobs by the specified JobKey, which is defined by the job itself. This means that jobs (and their associated triggers) are created exactly once, regardless of how many nodes are in the cluster, or how many times we deploy.
This is all well and good, but I'm concerned about a potential race condition when two instances are started at exactly the same time. Because there's no global lock around this code that all nodes in the cluster will respect, if two instances come online at the same time, I could end up with duplicate jobs or triggers, which kind of defeats the point of this code.
Is there a best practice for automatically defining Quartz jobs and triggers in a clustered environment? Or do I need to resort to setting my own lock?
I am not sure if there is a better way to do this in Quartz. But if you are already using Redis or Memcached, I would recommend letting all instances perform an atomic increment against a well-known key. If the code you pasted is supposed to run only one job per cluster per hour, you could do the following:
// the current hour, so the key changes once per hour
long timestamp = System.currentTimeMillis() / 1000 / 60 / 60;
String key = String.format("%s_%d", jobId, timestamp);
// this will only be true for one instance in the cluster per (job, timestamp) tuple
boolean shouldExecute = redis.incr(key) == 1;
if (shouldExecute) {
    // run the mutually exclusive code
}
The timestamp gives you a moving window within which the instances compete to execute the job.
I had (almost) the same problem: how to create triggers and jobs exactly once per software version in a clustered environment. I solved it by designating one of the cluster nodes as the lead node during start-up and letting it re-create the Quartz jobs. The lead node is the one which first successfully inserts the git revision number of the running software into the database; the other nodes use the Quartz configuration created by the lead node. Here's the complete solution: https://github.com/perttuta/quartz
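The core of that idea can be sketched in plain JDBC: whichever node wins the race to insert the revision row becomes the lead. The table name and schema here are hypothetical (revision as primary key), and SQLState 23505 is the standard unique-violation code:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// assumes: CREATE TABLE deployment_lead (revision VARCHAR(64) PRIMARY KEY)
boolean becomeLead(Connection conn, String gitRevision) throws SQLException {
    try (PreparedStatement ps =
            conn.prepareStatement("INSERT INTO deployment_lead (revision) VALUES (?)")) {
        ps.setString(1, gitRevision);
        ps.executeUpdate();
        return true; // insert succeeded: this node leads for this revision
    } catch (SQLException e) {
        if ("23505".equals(e.getSQLState())) { // unique violation: another node won the race
            return false;
        }
        throw e;
    }
}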
We need to allow users to import a huge catalog into the application. How do we achieve this with Spring Batch, given that a Job is a singleton in Spring Batch? How do we tweak it so that we can invoke the same job any number of times with thread safety? We are fine with synchronous processing and are not looking for async. Appreciate your inputs.
Even though the job configuration is a singleton, each job execution is created from the job configuration as a new object by the job launcher, so you should have no problems with concurrency.
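As a minimal sketch of what that looks like (the jobLauncher and importCatalogJob beans and the parameter names are assumed wiring, and JobLauncher.run declares several checked exceptions the surrounding method must handle):
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;

// a unique parameter per run makes each launch a distinct JobInstance
JobParameters params = new JobParametersBuilder()
        .addString("catalogFile", "/imports/catalog-123.csv") // hypothetical input
        .addLong("launchTime", System.currentTimeMillis())
        .toJobParameters();
JobExecution execution = jobLauncher.run(importCatalogJob, params);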
It sounds like multiple updates are going to happen in an unsafe way in your database. E.g. if table 1, row 1 is being updated by Job1 and another user kicks off Job2, there's no guarantee what values you'll end up with in row 1. I wouldn't be concerned about thread safety so much as row-level concurrency safety. Typically, if you only want a single import to run at a time, the solution is not something like Spring but a database-specific import tool.
UPDATE:
See this SO answer for how to customize Spring Batch to only allow one job to run at a time. Note - this has nothing to do with thread safety. This is not how Spring Batch is typically used, which is why it isn't listed as a normal use case in their docs.
Spring batch restrict single instance of job only
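Not necessarily what the linked answer implements, but one way to enforce a single running instance is to ask Spring Batch's JobExplorer whether an execution is already running before launching (bean names are assumed; note the check-then-launch still leaves a small race window across JVMs):
import java.util.Set;
import org.springframework.batch.core.JobExecution;

Set<JobExecution> running = jobExplorer.findRunningJobExecutions("importCatalogJob");
if (running.isEmpty()) {
    jobLauncher.run(importCatalogJob, params);
} else {
    // refuse to start a second concurrent instance
}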
I want a scheduler that creates a job/task/thread per entry in my database table.
Further, I want a mechanism to start, pause, stop, and restart each job without affecting the other jobs/tasks/threads. At any moment, I should be able to create a new job or delete one.
I am planning to handle all the job-related operations mentioned above through a web application hosted on a Tomcat server.
Which Java scheduler should I opt for, and how do I get started?
You can use Quartz Scheduler.
In Quartz I would suggest using JDBCJobStore; it allows you to store your jobs in the database.
As for creating a job per entry in the DB, you can write a job-creator factory that reads each entry from your DB and creates a job for it; that job will then be stored in the DB.
Ideally it's better to have a separate job class (implementing Job) for every kind of job you want to create. But since you want to create jobs dynamically, you can have one general job class that receives the job's context and performs the respective operation, based on that context, inside the overridden execute method, as sketched below.
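For illustration, a minimal sketch of such a general job class; the JobDataMap keys and task types are hypothetical:
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class GenericDatabaseJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // the merged data map carries per-job data stored alongside the job in the DB
        JobDataMap data = context.getMergedJobDataMap();
        String taskType = data.getString("taskType"); // hypothetical key
        String entryId = data.getString("entryId");   // hypothetical key
        switch (taskType) {
            case "notify":
                // ... send a notification for entryId
                break;
            case "cleanup":
                // ... clean up the data for entryId
                break;
            default:
                throw new JobExecutionException("Unknown task type: " + taskType);
        }
    }
}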
Using Quartz it's possible to start, stop, and pause your jobs without affecting other jobs.
Hope this helps!
I need to create and store a single instance of an object in the AppEngine datastore (there will never need to be more than one object).
It is the last run time for a cron job that I am scheduling.
The idea is that the cron job will only pick up for processing those rows that have been created/updated since its last run, and will update the last run time after it has completed.
What is the best way to do this considering concurrency issues as well - in case a previous job has not finished running?
If I understand your question correctly, it sounds like you could just create a 'job bookkeeping' entity that records whether a job is currently running, along with any necessary state about what you are processing with the job.
Then, access that bookkeeping entity using a transaction, so that only one process can do a read + update on it at a time. That will let you safely check whether another job is still running before starting a new job.
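A rough sketch with the legacy App Engine low-level datastore API (the kind, key, and property names are made up):
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;

DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
Transaction txn = ds.beginTransaction();
try {
    Key key = KeyFactory.createKey("JobBookkeeping", "singleton");
    Entity bookkeeping;
    try {
        bookkeeping = ds.get(txn, key);
    } catch (EntityNotFoundException e) {
        bookkeeping = new Entity(key); // first ever run: create the singleton entity
        bookkeeping.setProperty("running", false);
    }
    if (Boolean.TRUE.equals(bookkeeping.getProperty("running"))) {
        txn.rollback(); // a previous job is still running, so skip this run
    } else {
        bookkeeping.setProperty("running", true);
        ds.put(txn, bookkeeping);
        txn.commit(); // the transaction ensures only one process wins this read + update
        // ... do the work, then clear the flag and record the last run time the same way
    }
} finally {
    if (txn.isActive()) {
        txn.rollback();
    }
}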
(The datastore is non-relational, so I am guessing with your mention of 'rows', you instead mean entities of some Kind that you need to process? Your bookkeeping entity could store some state about which of these entities you'd processed so far, that would let you query for new ones to process).