I'm asking for help: I have jobs scheduled through Quartz in a job store, and everything works; some of the jobs call procedures on Oracle. I would like to put a job into a completed state when a procedure returns a certain result. I can pause the job, but I cannot set it to completed.
if (!result.substring(0, 4).equalsIgnoreCase("null")) {
    fase_1.callJobResult("JOB_RESULT", key.getName(), key.getGroup(), strJobex + ":" + parameter, result);
} else {
    // the procedure reported completion: here I would like to mark the job as completed
    try {
        fase_1.callJobResult("JOB_RESULT", key.getName(), key.getGroup(), strJobex + ":" + parameter,
                "SUCCESS. The job: " + key.getName().toUpperCase() + " job completed");
        System.out.println(result);
        SchedulerFactory factory = new StdSchedulerFactory();
        Scheduler scheduler = factory.getScheduler();
        JobKey jobKey = new JobKey(key.getName(), key.getGroup());
        //scheduler.pauseJob(jobKey); // this works, but it only pauses the job
        Trigger trigger = jobExecutionContext.getTrigger();
        // attempt to complete the trigger by calling the listener directly
        TriggerListner tl = new TriggerListner();
        tl.triggerComplete(trigger, jobExecutionContext, CompletedExecutionInstruction.SET_TRIGGER_COMPLETE);
    } catch (SchedulerException e) {
        e.printStackTrace();
    }
}
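For what it's worth, here is a minimal sketch of an alternative (an assumption on my part, not something from the question): instead of calling the listener's triggerComplete() by hand, the job could unschedule its own triggers inside the try block above once the procedure reports success, so Quartz never fires it again.
// sketch (Quartz 2.x API): remove every trigger of this job so it stops firing
for (Trigger t : scheduler.getTriggersOfJob(jobKey)) {
    scheduler.unscheduleJob(t.getKey());
}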
I am working on a Spring Batch application and I am having some difficulty trying to schedule a job correctly.
I have this class where my job is scheduled:
/**
 * This bean schedules and runs our Spring Batch job.
 */
@Component
@Profile("!test")
public class SpringBatchExampleJobLauncher {

    private static final Logger LOGGER = LoggerFactory.getLogger(SpringBatchExampleJobLauncher.class);

    @Autowired
    @Qualifier("launcher")
    private JobLauncher jobLauncher;

    @Autowired
    @Qualifier("updateNotaryDistrictsJob")
    private Job updateNotaryDistrictsJob;

    @Autowired
    @Qualifier("updateNotaryListInfoJob")
    private Job updateNotaryListInfoJob;

    @Scheduled(cron = "0 */5 * * * *")
    public void runUpdateNotaryDistrictsJob() {
        LOGGER.info("SCHEDULED run of updateNotaryDistrictsJob STARTED");

        Map<String, JobParameter> confMap = new HashMap<>();
        confMap.put("time", new JobParameter(System.currentTimeMillis()));
        JobParameters jobParameters = new JobParameters(confMap);

        try {
            jobLauncher.run(updateNotaryDistrictsJob, jobParameters);
        } catch (Exception ex) {
            LOGGER.error(ex.getMessage());
        }
    }
}
As you can see, on my runUpdateNotaryDistrictsJob() method I set this Spring cron expression:
@Scheduled(cron = "0 */5 * * * *")
in order to start my updateNotaryDistrictsJob every 5 minutes.
The problem is that when I run my application in debug mode I can see that the job is performed immediately (it stops at the first breakpoint). It seems that it is not waiting for the 5 minutes set by the cron expression.
What is wrong? How can I try to solve this issue?
The cron expression 0 */5 * * * * does not mean what you expect. "It seems that it is not waiting the 5 minutes set by the cron expression": that is not what the cron expression defines. The cron expression fires at second 0, every 5 minutes starting at minute 0 of every hour, which means it will never wait 5 minutes after the service has started. For example, if you started the service at 10:22, it will run at 10:25.
If you really need it to wait 5 minutes after the service has started, you should consider using @Scheduled(fixedRate = 300000, initialDelay = 300000) instead (both values are in milliseconds).
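A minimal sketch of that alternative, applied to the method from the question (300000 ms is 5 minutes):
// fires every 5 minutes, and waits 5 minutes after startup before the first run
@Scheduled(fixedRate = 300000, initialDelay = 300000)
public void runUpdateNotaryDistrictsJob() {
    // ... launch updateNotaryDistrictsJob as before ...
}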
I have a scheduled executor that resets a parameter to 0 and wakes all active threads to continue processing. However, after the initial run it is not executing again.
ScheduledExecutorService exec = Executors.newScheduledThreadPool(4);
exec.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        logger.info("Setting hourly limit record count back to 0 to continue processing");
        lines = 0;
        executor.notifyAll();
        Thread.currentThread().interrupt();
        return;
    }
}, 0, 1, TimeUnit.MINUTES);
There is another Executor defined in the class which executes further processes, and I am not sure whether this influences it:
ExecutorService executor = Executors.newCachedThreadPool();
for (String processList : processFiles) {
    String appName = processList.substring(0, processList.indexOf("-"));
    String scope = processList.substring(processList.lastIndexOf("-") + 1);
    logger.info("Starting execution of thread for app " + appName + " under scope: " + scope);
    try {
        File processedFile = new File(ConfigurationReader.processedDirectory + appName + "-" + scope + ".csv");
        processedFile.createNewFile();
        executor.execute(new APIInitialisation(appName, processedFile.length(), scope));
    } catch (InterruptedException | IOException e) {
        e.printStackTrace();
    }
}
From the documentation for ScheduledExecutorService.scheduleAtFixedRate():
If any execution of the task encounters an exception, subsequent executions are suppressed.
So something in your task is throwing an exception. My guess is the call to executor.notifyAll(), which is documented to throw an IllegalMonitorStateException:
if the current thread is not the owner of this object's monitor.
Your scheduled task most probably ends in an uncaught exception. Taken from the JavaDoc of ScheduledExecutorService.scheduleAtFixedRate:
If any execution of the task encounters an exception, subsequent executions are suppressed.
Because you are provoking an uncaught exception, all further executions are cancelled.
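A sketch of one way to keep the periodic task alive (assuming lines, logger and executor are the fields from the question): own the monitor before calling notifyAll(), and catch everything inside run() so a stray exception cannot suppress future executions. Note that notifyAll() only wakes threads that are wait()ing on that same object.
exec.scheduleAtFixedRate(() -> {
    try {
        logger.info("Setting hourly limit record count back to 0 to continue processing");
        lines = 0;
        synchronized (executor) {   // must hold the monitor before notifyAll()
            executor.notifyAll();
        }
    } catch (RuntimeException e) {
        // log and swallow, otherwise scheduleAtFixedRate suppresses all later runs
        e.printStackTrace();
    }
}, 0, 1, TimeUnit.MINUTES);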
Is it possible to get information about already executed (finished) jobs? I browsed the Javadocs and learned how to fetch JobDetails etc., but I can't find a way to learn about the jobs that have already been executed (and finished).
Any hints?
You can get the next trigger time using the code below and compare it with the current time; if the next fire time is in the past, then the job has already executed:
Scheduler scheduler = new StdSchedulerFactory().getScheduler();
for (String groupName : scheduler.getJobGroupNames()) {
    for (JobKey jobKey : scheduler.getJobKeys(GroupMatcher.jobGroupEquals(groupName))) {
        String jobName = jobKey.getName();
        String jobGroup = jobKey.getGroup();
        // get the job's triggers
        List<Trigger> triggers = (List<Trigger>) scheduler.getTriggersOfJob(jobKey);
        Date nextFireTime = triggers.get(0).getNextFireTime();
        Date currTime = new Date();
        if (currTime.after(nextFireTime)) {
            System.out.println("[jobName] : " + jobName + " [groupName] : "
                    + jobGroup + " - has already executed");
        }
    }
}
If you want to keep track of a detailed history of all executions of your jobs, you simply have to implement something that records this information yourself. You can use listeners for this purpose.
Depending on what exactly you're trying to accomplish, you may use JobListeners, TriggerListeners or SchedulerListeners.
For 'global' JobListeners:
<initialize JobListeners>
public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
    try {
        JobKey jobKey = context.getJobDetail().getKey();
        String schedulerName = context.getScheduler().getSchedulerName();
        String jobName = jobKey.getName();
        String groupName = jobKey.getGroup();
        // execution start
        Date startDate = context.getFireTime();
        // execution time
        long runTime = context.getJobRunTime();
        // execution end
        long endDateM = startDate.getTime() + runTime;
        Date endDate = new Date(endDateM);
        // get more information here
    } catch (Exception e) {
        e.printStackTrace();
    }
}
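For completeness, a minimal sketch of registering such a listener globally (the listener class name MyJobHistoryListener is hypothetical; the registration calls are the standard Quartz 2.x ListenerManager API):
Scheduler scheduler = new StdSchedulerFactory().getScheduler();
// register the listener for every job; use a narrower Matcher for specific groups or jobs
scheduler.getListenerManager().addJobListener(
        new MyJobHistoryListener(), EverythingMatcher.allJobs());
scheduler.start();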
Note: Please be wary of the performance impact listeners can cause. As mentioned in the Quartz docs:
One thing that CAN slow down quartz itself is using a lot of listeners (TriggerListeners, JobListeners, and SchedulerListeners). The time spent in each listener obviously adds into the time spent "processing" a job's execution, outside of actual execution of the job.
This doesn't mean that you should be terrified of using listeners, it just means that you should use them judiciously - don't create a bunch of "global" listeners if you can really make more specialized ones. Also don't do "expensive" things in the listeners, unless you really need to. Also be mindful that many plug-ins (such as the "history" plugin) are actually listeners.
I have an application that only implements a Map function.
I'm creating 1000 jobs, each with a unique PrefixFilter.
Example:
public void startNewScan(String prefix, long endTime) throws Exception {
    Job job = new Job(conf, "MyJob");
    job.setNumReduceTasks(0);

    Scan scan = new Scan();
    scan.setTimeRange(0, endTime);
    scan.addColumn(Bytes.toBytes("col"), Bytes.toBytes("Value"));
    scan.setFilter(new PrefixFilter(prefix.getBytes()));

    TableMapReduceUtil.initTableMapperJob(tableName, scan, ExtractMapper.class, ImmutableBytesWritable.class, Result.class, job);
    job.waitForCompletion(true);
}
Now - I don't want to wait for completion, because executing 1000 jobs would take me forever. Creating a thread for each job is also not an option.
Is there anything built in for this usage?
Something like JobsPool that accepts all the jobs and has its own waitForCompletion for all the jobs.
Use:
job.submit();
"Submit the job to the cluster and return immediately."
I am using Quartz as follows:
schedulerFactory = new StdSchedulerFactory();
scheduler = schedulerFactory.getScheduler();
JobDetail startECMSJob = new JobDetail("startECMSJob", "group1", StartECMSJob.class);
Trigger trigger = TriggerUtils.makeMinutelyTrigger(30);
trigger.setName("TriggersGroup1");
trigger.setGroup("group1");
scheduler.scheduleJob(startECMSJob, trigger);
scheduler.start();
The problem is that Quartz starts straight away on deploy. I want it to start only 30 minutes after deploy.
The same thing happens when I reschedule it: I don't want it to start straight away when it is rescheduled.
Reschedule code:
//JobDetail startECMSJob = new JobDetail("startECMSJob", "group1", StartECMSJob.class);
JobDetail jobDetail=jobContext.getJobDetail();
Trigger trigger = TriggerUtils.makeSecondlyTrigger(30);
trigger.setName("aa");
trigger.setGroup("group1");
trigger.setJobName(jobContext.getJobDetail().getName());
trigger.setJobGroup(jobContext.getJobDetail().getGroup());
Scheduler scheduler = jobContext.getScheduler();
scheduler.rescheduleJob("TriggersGroup1", "group1", trigger);
Any idea how I can control when the first trigger fires?
Thanks,
Ray.
trigger.setStartTime(new Date(System.currentTimeMillis() + 30 * 60 * 1000));
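In other words (a sketch against the Quartz 1.x API used above): set an explicit start time on the trigger before scheduling or rescheduling it, so the first fire happens 30 minutes from now rather than immediately.
Trigger trigger = TriggerUtils.makeMinutelyTrigger(30);
trigger.setName("TriggersGroup1");
trigger.setGroup("group1");
// delay the first fire by 30 minutes; without this the trigger starts immediately
trigger.setStartTime(new Date(System.currentTimeMillis() + 30 * 60 * 1000));
scheduler.scheduleJob(startECMSJob, trigger);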