I want to delete a job, and for that I need its JobKey, but I only know the job class name. How can I get the JobKey from the class name?
You can find this information by iterating over all job groups of your scheduler instance. From there you get each JobKey; with the JobKey you can ask for the JobDetail, which holds the class information. If it matches, return the key.
public JobKey getJobKeyByJobClass(Scheduler scheduler, String className) throws SchedulerException {
    for (final String group : scheduler.getJobGroupNames()) {
        for (final JobKey jobKey : scheduler.getJobKeys(org.quartz.impl.matchers.GroupMatcher.groupEquals(group))) {
            if (className.equals(scheduler.getJobDetail(jobKey).getJobClass().getName())) {
                return jobKey;
            }
        }
    }
    return null;
}
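Since the original goal was deletion: once you have the key, `scheduler.deleteJob(...)` removes the JobDetail and unschedules all of its triggers. A minimal, self-contained sketch (the class name "com.example.MyJob" is illustrative, not from the question):

```java
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class DeleteByClassName {

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        // Look up the key by class name, iterating all groups as in the helper above
        JobKey jobKey = findKey(scheduler, "com.example.MyJob");

        if (jobKey != null) {
            // deleteJob removes the JobDetail and all of its triggers
            boolean deleted = scheduler.deleteJob(jobKey);
            System.out.println("Deleted: " + deleted);
        }
        scheduler.shutdown();
    }

    static JobKey findKey(Scheduler scheduler, String className) throws SchedulerException {
        for (String group : scheduler.getJobGroupNames()) {
            for (JobKey jk : scheduler.getJobKeys(org.quartz.impl.matchers.GroupMatcher.groupEquals(group))) {
                if (className.equals(scheduler.getJobDetail(jk).getJobClass().getName())) {
                    return jk;
                }
            }
        }
        return null;
    }
}
```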
You can obtain the JobKey in several ways. Let's imagine that your Job implementation is the MyJob class.
From the JobExecutionContext. If your job is currently executing, you can do:
Scheduler scheduler = schedulerFactory.getScheduler();
JobKey jobKey = null;
for (JobExecutionContext jobCtx : scheduler.getCurrentlyExecutingJobs()) {
    JobDetail jobDetail = jobCtx.getJobDetail();
    if (MyJob.class.equals(jobDetail.getJobClass())) {
        jobKey = jobDetail.getKey();
        break;
    }
}
The same with streams:
Scheduler scheduler = schedulerFactory.getScheduler();
Optional<JobDetail> job = scheduler.getCurrentlyExecutingJobs()
        .stream()
        .map(JobExecutionContext::getJobDetail)
        .filter(jobDetail -> MyJob.class.equals(jobDetail.getJobClass()))
        .findFirst();
JobKey jobKey = job.map(JobDetail::getKey).orElse(null);
By group name. Usually, when you're submitting a new job for execution, you provide a group name and a job name. If you're not doing so, start now; it will make things easier :)
Scheduler scheduler = schedulerFactory.getScheduler();
JobKey jobKey = null;
for (JobKey jk : scheduler.getJobKeys(GroupMatcher.jobGroupEquals("myGroup"))) {
    if (MyJob.class.equals(scheduler.getJobDetail(jk).getJobClass())) {
        jobKey = jk;
        break;
    }
}
The same with streams (note that getJobDetail throws a checked SchedulerException, so it has to be wrapped before it can be used in a lambda):
Scheduler scheduler = schedulerFactory.getScheduler();
Optional<JobDetail> job = scheduler.getJobKeys(GroupMatcher.jobGroupEquals("myGroup"))
        .stream()
        .map(jk -> {
            try {
                return scheduler.getJobDetail(jk);
            } catch (SchedulerException e) {
                throw new IllegalStateException(e);
            }
        })
        .filter(jobDetail -> MyJob.class.equals(jobDetail.getJobClass()))
        .findFirst();
JobKey jobKey = job.map(JobDetail::getKey).orElse(null);
Hope it helps!
My Spring Batch job is set up like this:
@Bean
Job myJob(JobBuilderFactory jobBuilderFactory,
          @Qualifier("stepA") Step stepA,
          @Qualifier("stepB") Step stepB) {
    return jobBuilderFactory.get("myJob")
            .incrementer(new RunIdIncrementer())
            .start(stepA)
            .next(stepB)
            .build();
}
And here is my launcher:
@Autowired
JobLauncher(@Qualifier("myJob") Job job, JobLauncher jobLauncher) {
    this.job = job;
    this.jobLauncher = jobLauncher;
}

@Scheduled(fixedDelay = 5000)
void launcher() throws JobParametersInvalidException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException {
    jobLauncher.run(job, newExecution());
}
private JobParameters newExecution() {
    Map<String, JobParameter> parameters = new HashMap<>();
    this.dateTime = new DateTime(DateTimeZone.UTC);
    this.dateTimeString = this.dateTime.toString(ISODateTimeFormat.dateTime());
    JobParameter parameter = new JobParameter(this.dateTimeString);
    parameters.put("currentTime", parameter);
    return new JobParameters(parameters);
}
As you can see, my job is scheduled to launch every 5 seconds. But after the first launch it never ends; it just rolls on into the next execution, as if the job were in a loop. I would like it to finish and restart after 5 seconds.
I had missed that readers need to return null when they are finished. Problem solved.
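For reference, the read-until-null contract looks like this. The class below is a minimal, framework-free sketch that mimics Spring Batch's ItemReader.read() contract; it is not the actual interface:

```java
import java.util.Iterator;
import java.util.List;

// A minimal ItemReader-style class: read() hands out one item per call
// and signals "no more input" by returning null, which ends the step.
class ListReader<T> {
    private final Iterator<T> iterator;

    ListReader(List<T> items) {
        this.iterator = items.iterator();
    }

    public T read() {
        // Returning null tells the framework the reader is exhausted
        return iterator.hasNext() ? iterator.next() : null;
    }
}
```

A reader that never returns null keeps the step (and therefore the job) running forever, which is exactly the looping behaviour described above.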
You can also call System.exit(0); at the end of your main class, which terminates the JVM and thereby the batch.
Hi, I have two job instances in Quartz that I do not want to run in parallel. I simplified the code in the example below to show what doesn't match my expectations.
public class QuartzTest {
    public static void main(String[] args) throws SchedulerException {
        SchedulerFactory schedulerFactory = new StdSchedulerFactory();
        Scheduler scheduler = schedulerFactory.getScheduler();
        scheduler.start();

        JobDetail job1 = newJob(TestJob.class).withIdentity("job1", "group1").build();
        CronTrigger trigger1 = newTrigger().withIdentity("trigger1", "group1").startAt(new Date()).withSchedule(cronSchedule(getCronExpression(1))).build();
        scheduler.scheduleJob(job1, trigger1);

        JobDetail job2 = newJob(TestJob.class).withIdentity("job2", "group1").build();
        CronTrigger trigger2 = newTrigger().withIdentity("trigger2", "group1").startAt(new Date()).withSchedule(cronSchedule(getCronExpression(1))).build();
        scheduler.scheduleJob(job2, trigger2);

        for (int i = 0; i < 5; i++) {
            System.out.println(trigger1.getNextFireTime());
            System.out.println(trigger2.getNextFireTime());
            try {
                Thread.sleep(1 * 60 * 1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private static String getCronExpression(int interval) {
        return "0 */" + interval + " * * * ?";
    }
}
This is the job class:
@DisallowConcurrentExecution
public class TestJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        System.out.println("Job started");
        System.out.println("Job sleeping 30s...");
        try {
            Thread.sleep(30 * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Job finished.");
    }
}
So here I am scheduling two jobs to run every minute (in the real case, one runs every minute and the other every 5 minutes), and this is the output I get:
Job started
Job sleeping 30s...
Job started
Job sleeping 30s...
Job finished.
Job finished.
So both jobs run in parallel. A sequential execution, where one job waits for the other to complete before running, would give me this sequence:
Job started
Job sleeping 30s...
Job finished.
Job started
Job sleeping 30s...
Job finished.
So why is this not happening?
From the docs:

@DisallowConcurrentExecution:

An annotation that marks a Job class as one that must not have multiple instances executed concurrently (where instance is based-upon a JobDetail definition - or in other words based upon a JobKey).

A JobKey is composed of both a name and a group. In your example the names are not the same, so these are two different jobs. @DisallowConcurrentExecution only guarantees that one execution of job1 completes before the next execution of job1 is triggered; it says nothing about job2.
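One way to get the sequential behaviour the question asks for is to register a single durable JobDetail with two triggers: both firings then share one JobKey, so @DisallowConcurrentExecution applies. A minimal sketch, with a placeholder job class standing in for TestJob:

```java
import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.JobBuilder.newJob;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.impl.StdSchedulerFactory;

public class SerializedJobsDemo {

    @DisallowConcurrentExecution
    public static class SleepyJob implements Job {
        public void execute(JobExecutionContext context) {
            System.out.println("Job started");
            try { Thread.sleep(30 * 1000); } catch (InterruptedException ignored) { }
            System.out.println("Job finished.");
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        // One durable JobDetail; both triggers fire the same JobKey
        // ("job1", "group1"), so @DisallowConcurrentExecution serializes the runs.
        JobDetail job = newJob(SleepyJob.class)
                .withIdentity("job1", "group1")
                .storeDurably()
                .build();
        scheduler.addJob(job, false);

        Trigger everyMinute = newTrigger()
                .withIdentity("trigger1", "group1")
                .forJob(job)
                .withSchedule(cronSchedule("0 * * * * ?"))
                .build();
        Trigger everyFiveMinutes = newTrigger()
                .withIdentity("trigger2", "group1")
                .forJob(job)
                .withSchedule(cronSchedule("0 */5 * * * ?"))
                .build();

        scheduler.scheduleJob(everyMinute);
        scheduler.scheduleJob(everyFiveMinutes);
        scheduler.start();
    }
}
```

This is just one option; if the two jobs genuinely have different code, a TriggerListener that vetoes execution while the other job is running is another approach.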
I am trying to execute a series of jobs where one job executes two others, and one of those two executes another:

Job 1 --> Job 3
      --> Job 2 --> Job 4
The jobs are for sending data from the db. This is what I have done:
@PersistJobDataAfterExecution
@DisallowConcurrentExecution
public class MembersJob implements Job {
    List<Member> unsentMem = new ArrayList<Member>();
    JSONArray customerJson = new JSONArray();
    Depot depot;

    public MembersJob() {
        depot = new UserHandler().getDepot();
    }

    public void execute(JobExecutionContext jec) throws JobExecutionException {
        if (Util.getStatus()) {
            runSucceedingJobs(jec);
        } else {
            System.out.println("No internet connection");
        }
    }

    public void runSucceedingJobs(JobExecutionContext context) {
        JobDataMap jobDataMap = context.getJobDetail().getJobDataMap();
        Object milkCollectionJobObj = jobDataMap.get("milkCollectionJob");
        MilkCollectionsJob milkCollectionsJob = (MilkCollectionsJob) milkCollectionJobObj;
        Object productsJobObj = jobDataMap.get("milkCollectionJob");
        ProductsJob productsJob = (ProductsJob) productsJobObj;
        try {
            milkCollectionsJob.execute(context);
            productsJob.execute(context);
        } catch (JobExecutionException ex) {
            System.out.println("Error...");
            Logger.getLogger(MembersJob.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
Calling the jobs in series:
// Members Job
JobKey membersJobKey = new JobKey("salesJob", "group1");
JobDetail membersJob = JobBuilder.newJob(MembersJob.class)
        .withIdentity(membersJobKey)
        .build();
membersJob.getJobDataMap().put("milkCollectionJob", new MilkCollectionsJob());
membersJob.getJobDataMap().put("productsJob", new ProductsJob());

CronTrigger membersTrigger = newTrigger()
        .withIdentity("salesTrigger", "group1")
        .withSchedule(cronSchedule("0/10 * * * * ?"))
        .forJob(membersJobKey)
        .build();

Scheduler scheduler = new StdSchedulerFactory().getScheduler();
scheduler.scheduleJob(membersJob, membersTrigger);
scheduler.start();
The problem is that the members job starts but does not start the other jobs when it is done. What is the easiest and fastest way to achieve this?
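One way to chain Quartz jobs without calling another job's execute() by hand is the built-in JobChainingJobListener, which fires a second job when the first completes. A minimal sketch with a placeholder job class; the job names mirror the question but the classes are illustrative, not the poster's actual code:

```java
import static org.quartz.JobBuilder.newJob;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.listeners.JobChainingJobListener;

public class JobChainDemo {

    // Placeholder standing in for MembersJob, MilkCollectionsJob, ProductsJob
    public static class StepJob implements Job {
        public void execute(JobExecutionContext ctx) {
            System.out.println("ran " + ctx.getJobDetail().getKey());
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobKey membersKey = new JobKey("membersJob", "group1");
        JobKey milkKey = new JobKey("milkCollectionsJob", "group1");
        JobKey productsKey = new JobKey("productsJob", "group1");

        // Jobs are stored durably so they can exist without their own triggers
        scheduler.addJob(newJob(StepJob.class).withIdentity(membersKey).storeDurably().build(), false);
        scheduler.addJob(newJob(StepJob.class).withIdentity(milkKey).storeDurably().build(), false);
        scheduler.addJob(newJob(StepJob.class).withIdentity(productsKey).storeDurably().build(), false);

        // When membersJob completes, fire milkCollectionsJob; when that
        // completes, fire productsJob: a sequential chain.
        JobChainingJobListener chain = new JobChainingJobListener("salesChain");
        chain.addJobChainLink(membersKey, milkKey);
        chain.addJobChainLink(milkKey, productsKey);
        scheduler.getListenerManager().addJobListener(chain);

        scheduler.start();
        scheduler.triggerJob(membersKey);  // kick off the chain once
    }
}
```

Note that calling another job's execute() directly, as in the question, bypasses the scheduler entirely; the listener approach lets Quartz manage each job's lifecycle, and avoids stashing live job instances in the JobDataMap.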
I want to catch a Hibernate database exception in my Quartz job class; my Quartz listener and job classes are below. I'm testing my scheduler against a database-failure scenario: while the job is executing, I stop the MySQL server to verify that my job class catches the Hibernate exception.
But the code below does not catch the exception when the database goes down.
Can someone point out what I'm doing wrong here?
MyListener.java
@WebListener
public class MyListener extends QuartzInitializerListener {

    private final Logger logger = LoggerFactory.getLogger(MyListener.class);
    private Scheduler scheduler;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        super.contextInitialized(sce);
        ServletContext ctx = sce.getServletContext();
        StdSchedulerFactory factory = (StdSchedulerFactory) ctx.getAttribute(QUARTZ_FACTORY_KEY);
        try {
            scheduler = factory.getScheduler();
            JobDetail job = newJob(TestQuartzJob.class)
                    .withIdentity("job1", "group1")
                    .build();
            CronTrigger trigger = newTrigger()
                    .withIdentity("trigger1", "group1")
                    .withSchedule(cronSchedule("0/20 * * * * ?"))
                    .build();
            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        } catch (Exception e) {
            ctx.log("There was an error scheduling the job.", e);
        }
    }
}
TestQuartzJob.java
public class TestQuartzJob implements Job {

    Logger logger = Logger.getLogger(TestQuartzJob.class);
    Session mysession = HibernateUtilities.getJobsSessionFactory().openSession();
    Transaction transaction = mysession.beginTransaction();

    @Override
    public void execute(JobExecutionContext jec) throws JobExecutionException {
        try {
            Student getstudName = (Student) mysession
                    .createCriteria(Student.class).add(eq("student_id", "stud1234")).uniqueResult();
            String name = getstudName.getName();
        } catch (Exception e) {
            logger.info("--- Error in job!");
            JobExecutionException e2 = new JobExecutionException(e);
            // this job will refire immediately
            e2.setRefireImmediately(true);
            throw e2;
        }
    }
}
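A detail that may explain the behaviour: mysession and transaction are created in field initializers, which run when Quartz instantiates the job, before execute() is ever entered, so an exception thrown there (e.g. openSession() failing because MySQL is down) can never reach the try/catch inside execute(). A small, Hibernate-free sketch of that principle (all names are illustrative):

```java
// Demonstrates that exceptions in field initializers are thrown during
// construction, before any method body (and its try/catch) can run.
class FlakyResource {
    static boolean dbUp = false;

    static String open() {
        if (!dbUp) throw new IllegalStateException("DB down");
        return "session";
    }
}

class JobLikeObject {
    // Field initializer: runs at construction time, like mysession above
    String session = FlakyResource.open();

    String execute() {
        try {
            return "used " + session;
        } catch (Exception e) {
            return "caught in execute";  // never reached for init-time failures
        }
    }
}
```

Moving the resource acquisition inside the try block of execute() is what lets the job itself catch the failure.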
This is my code:
public class Test {
    public static void main(String[] args) throws Exception {
        String logPath = "D:\\mywork\\OMS\\Tymon\\testlog\\testlog.log";
        File file = new File(logPath);
        SchedulerFactory schedFact = new StdSchedulerFactory();
        Scheduler sched = schedFact.getScheduler();
        sched.start();
        JobDetail jobDetail = new JobDetail("a", "b", TestJob.class);
        CronTrigger trigger = new CronTrigger("c", "d");
        trigger.setCronExpression("0/23 * * * * ?");
        sched.scheduleJob(jobDetail, trigger);
    }
}
While the job is running, the file D:\mywork\OMS\Tymon\testlog\testlog.log can't be renamed or deleted; it seems the file handle is always held.
How can I fix this? Please help ~
Why do you create the File file = new File(logPath) object? It seems it is never used anywhere else in your logic.
Also note that new File(logPath) only creates a path object; it does not open a file handle, so that line is not what locks the file. The handle keeping testlog.log locked is most likely held by whatever actually writes to that log (a logging framework appender, or the job itself), and the file will stay locked until that writer is closed.