Add multiple triggers to a single Quartz job - Java

I want to dynamically add triggers to a job, but can't find any helpful methods on Scheduler.
I thought I would just be able to call the scheduleJob method repeatedly, but this gives me an ObjectAlreadyExistsException "because one already exists with this identification".
How can I do this?
EDIT
private boolean scheduleLoadJob( XfuScheduleTimeInfo time )
{
    LoadScheduleJob job = new LoadScheduleJob( time );
    JobDetail detail;
    Integer id = Integer.valueOf( time.getScheduleId() );
    if( _hashMap.containsKey( id ) )
    {
        detail = _hashMap.get( Integer.valueOf( time.getScheduleId() ) );
    }
    else
    {
        detail = job.getDetail();
        _hashMap.put( id, detail );
    }
    try
    {
        Trigger newTrigger = job.getTrigger();
        _log.debug( "------" + newTrigger.getKey() );
        _quartzScheduler.scheduleJob( detail, newTrigger );
        return true;
    }
    catch( ParseException e )
    {
        _log.error( "Unable to parse cron expression for " + job.getInfo() );
        return false;
    }
    catch( SchedulerException e )
    {
        _log.error( "Job scheduling failed for " + job.getInfo() );
        return false;
    }
}
With console output:
------ LoadJobs.Trigger-44
batch acquisition of 1 triggers
Producing instance of Job 'LoadJobs.Job-42', class=com.scheduling.LoadScheduleJob
Calling execute on job LoadJobs.Job-42
batch acquisition of 1 triggers
Job called for: 42 : 44
------ LoadJobs.Trigger-45
Job scheduling failed for 42 : 45 - 1/5 * * ? * *

This post gives a hint, but the conclusion ( schedulerInstance.add(trigger) ) is not valid as of Quartz 2.0.1.
Instead, after assigning the job to the trigger ( one way is using TriggerBuilder's forJob method ), use the following:
schedulerInstance.scheduleJob( newTrigger )
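
For example, a minimal sketch of attaching an additional trigger to a job that is already stored in the scheduler (Quartz 2.x assumed; the job and trigger names are illustrative, loosely based on the log output above):

import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.JobKey;
import org.quartz.Trigger;

// Key of the JobDetail that has already been scheduled once.
JobKey jobKey = new JobKey( "Job-42", "LoadJobs" );

// Every trigger needs its own unique identity, but many triggers may point at the same job.
Trigger extraTrigger = newTrigger()
    .withIdentity( "Trigger-45", "LoadJobs" )        // must not clash with an existing trigger
    .forJob( jobKey )                                // attach the trigger to the existing job
    .withSchedule( cronSchedule( "1/5 * * ? * *" ) )
    .build();

// scheduleJob(Trigger) works here because the trigger already knows its job;
// scheduleJob(detail, trigger) would try to store the JobDetail again and throw ObjectAlreadyExistsException.
_quartzScheduler.scheduleJob( extraTrigger );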

CronTrigger trigger=null;
CronTrigger trigger1=null;
CronTrigger trigger2=null;
JobDetail job = new JobDetail();
job.setName("dummyJobName");
job.setJobClass(ExampleJob.class);
trigger = new CronTrigger();
trigger.setName("AppTrigger");
trigger.setGroup(job.getGroup());
trigger.setJobName(job.getName());
trigger.setJobGroup(job.getGroup());
trigger.setCronExpression("*/2 * * * * ?");
trigger1 = new CronTrigger();
trigger1.setName("AppTrigger1");
trigger1.setGroup(job.getGroup());
trigger1.setJobName(job.getName());
trigger1.setJobGroup(job.getGroup());
trigger1.setCronExpression("*/2 * * * * ?");
trigger2 = new CronTrigger();
trigger2.setName("AppTrigger2");
trigger2.setGroup(job.getGroup());
trigger2.setJobName(job.getName());
trigger2.setJobGroup(job.getGroup());
trigger2.setCronExpression("*/2 * * * * ?");
Scheduler scheduler = new StdSchedulerFactory().getScheduler();
scheduler.start();
scheduler.addJob(job, true);
scheduler.scheduleJob(trigger);
scheduler.scheduleJob(trigger1);
scheduler.scheduleJob(trigger2);

You can call scheduleJob repeatedly. Just make sure that you give each Trigger a unique name/group.
See TriggerBuilder.withIdentity: http://www.quartz-scheduler.org/docs/api/2.0.0/index.html

Related

Batch job re-scheduling using Java, CronExpression

I have a job which runs at a particular time every day.
I have to update the CronExpression of the job using some logic; the problem is that once the CronExpression is updated, I have to restart the server to run the job at the new time.
How can I restart the job without restarting my server?
I have searched through other questions but was not able to find a solution.
My job that updates the cronExpression for another job is as follows:
public void reSchedulerNotofocationjob(){
    List<Slots> slots = config.getActiveConfiguration().getSlotDetils();
    int size = slots.size();
    LocalTime slotTime;
    LocalTime currenttime = new LocalTime();
    String cronExp;
    BatchJobs batchJobs = mongoOperations.findById("5b1f69c21f74e5ecc0c607ea", BatchJobs.class);
    logger.debug("batch job id and job name " + batchJobs.getId() + " and " + batchJobs.getJobName());
    for(int i = 0; i < size; i++){
        slotTime = LocalTime.parse(slots.get(i).getSlotTime());
        if(currenttime.isBefore(slotTime)){
            cronExp = slotTime.getSecondOfMinute() + " " + slotTime.getMinuteOfHour() + " " + slotTime.getHourOfDay() + " * * ? *";
            mongoOperations.findAndModify(new Query(Criteria.where("_id").is("5b1f69c21f74e5ecc0c607ea")), new Update().set("cronExpression", cronExp), BatchJobs.class);
            break;
        }else{
            if(i + 1 < size){
                slotTime = LocalTime.parse(slots.get(i + 1).getSlotTime());
                if(currenttime.isBefore(slotTime)){
                    cronExp = slotTime.getSecondOfMinute() + " " + slotTime.getMinuteOfHour() + " " + slotTime.getHourOfDay() + " * * ? *";
                    mongoOperations.findAndModify(new Query(Criteria.where("_id").is("5b1f69c21f74e5ecc0c607ea")), new Update().set("cronExpression", cronExp), BatchJobs.class);
                    break;
                }
            }else{
                slotTime = LocalTime.parse(slots.get(0).getSlotTime());
                cronExp = slotTime.getSecondOfMinute() + " " + slotTime.getMinuteOfHour() + " " + slotTime.getHourOfDay() + " * * ? *";
                mongoOperations.findAndModify(new Query(Criteria.where("_id").is("5b1f69c21f74e5ecc0c607ea")), new Update().set("cronExpression", cronExp), BatchJobs.class);
                break;
            }
        }
    }
}
How can I restart the batch job?
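
If the job is driven by a Quartz trigger, one way to pick up the new cron expression at runtime (without restarting the server) is to replace the existing trigger with Scheduler.rescheduleJob. A minimal sketch, assuming Quartz 2.x and illustrative trigger names that are not taken from the code above:

import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerKey;

void applyNewCron( Scheduler scheduler, String newCronExp ) throws SchedulerException {
    // Key of the trigger that currently fires the batch job (illustrative name/group).
    TriggerKey key = TriggerKey.triggerKey( "batchNotificationTrigger", "batchGroup" );
    Trigger oldTrigger = scheduler.getTrigger( key );

    // Build a replacement trigger for the same job with the updated expression.
    Trigger replacement = newTrigger()
        .withIdentity( key )
        .forJob( oldTrigger.getJobKey() )              // keep it pointed at the same job
        .withSchedule( cronSchedule( newCronExp ) )
        .build();

    // Swaps the old trigger for the new one; no server restart needed.
    scheduler.rescheduleJob( key, replacement );
}

Something like this would be called right after the new cronExpression is written to MongoDB in the method above.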

Java concurrency: more than one thread does not get executed in thread pool

I have a project with a client that makes calls to a web server. The web server continuously keeps one connection open to another server which serves an XML file. This XML file gets converted into Java objects. To perform these actions, I use a thread pool.
First there is a worker thread with a while loop. In the loop I call another method to retrieve the XML data.
public static void startXmlRetrieval( String sbeSystem, String params ) throws Exception {
    Processable xmlEventTask = sTaskWorkQueue.poll();
    if ( xmlEventTask == null )
        xmlEventTask = new EventXMLTask( sInstance );
    else if ( !(xmlEventTask instanceof EventXMLTask) )
        xmlEventTask = new EventXMLTask( sInstance );
    ((EventXMLTask) xmlEventTask).setXmlRetrievalParams( params );
    ((EventXMLTask) xmlEventTask).setXmlRetrievalAddr( sbeSystem );
    sTaskWorkQueue.add( xmlEventTask );
    synchronized ( sLockObject ) {
        System.out.println( "Initiating thread to retrieve XML feed" );
        sThreadPool.execute( ((EventXMLTask) xmlEventTask).getTaskRunnable() );
        // send the continuous thread from the ServerSessionHandler class to wait for this execution to finish
        sLockObject.wait();
    }
}
The above method executes and control comes back to the object where the following method is declared.
@Override
public void handleState( Processable task, int state ) throws Exception {
    switch ( state ) {
        case XML_RETRIEVAL_FAILED:
            recycleXMLTask( (EventXMLTask) task );
            //throw new Exception( "XML retrieval failed" );
        case XML_RETRIEVAL_COMPLETED:
            synchronized ( sLockObject ) {
                ConvertManager.startXMLConversion( sInstance, sXmlData );
            }
            break;
        case DATABASE_RETRIEVAL_FAILED:
        case DATABASE_RETRIEVAL_COMPLETED:
            recycleDatabaseTask( (DatabaseTask) task );
            break;
        case UNX_COMMAND_EXEC_FAILED:
        case UNX_COMMAND_EXEC_COMPLETED:
            recycleUnxCommandExecTask( (CommandOutputRetrievalTask) task );
            break;
        case ConvertManager.XML_CONVERSION_COMPLETED:
            synchronized ( sLockObject ) {
                removeXMLEventRetrieval( (TaskBase) task );
                sLockObject.notifyAll();
            }
            break;
    }
}
In the case branch "XML_RETRIEVAL_COMPLETED" I want to pass another task to the same thread pool to execute the conversion of the XML data.
The problem is that the method ConvertManager.startXMLConversion is executed, but when it comes to submitting a callable (FutureTask) to the thread pool, its call method is never executed.
For the thread group in the debugger it says "WAIT", and it is currently stuck in the Unsafe.park method, which is called from the FutureTask.awaitDone method.
Please help me figure out what the thread is waiting for. I used the synchronized statement so that one thread waits for the other, but the other thread only executes up to a certain point and then stops. I also tried playing around with notify and notifyAll on sLockObject, without any success.
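
For reference, a minimal, self-contained sketch (not the poster's code, and not necessarily the cause here) of one common way a thread ends up parked in FutureTask.awaitDone: a task blocking on Future.get() for a second task submitted to the same saturated fixed-size pool.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolStarvationSketch {
    public static void main( String[] args ) {
        // A pool with a single worker thread.
        ExecutorService pool = Executors.newFixedThreadPool( 1 );

        pool.submit( () -> {
            // The only worker thread is now busy running this outer task.
            Future<String> inner = pool.submit( () -> "converted XML" );
            // Blocks forever: the inner task can never start, because the one
            // worker thread is parked right here in FutureTask.awaitDone.
            return inner.get();
        } );

        System.out.println( "Outer task submitted; it will never complete." );
    }
}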
The ConvertManager.startXmlConversion method looks as follows:
public static List<XMLEventData> startXMLConversion( AbstractManager mng, Document xmlDocument ) throws Exception {
    sInstance.mCallingManager = mng;
    List<XMLEventData> retVal;
    Processable converterTask = sInstance.sTaskWorkQueue.poll();
    try {
        if ( converterTask == null )
            converterTask = new XMLToXMLEventConverterTask( sInstance );
        else if ( !(converterTask instanceof XMLToXMLEventConverterTask) )
            converterTask = new XMLToXMLEventConverterTask( sInstance );
        else if ( ((XMLToXMLEventConverterTask) converterTask).getTaskCallable() == null ) {
            converterTask = new XMLToXMLEventConverterTask( sInstance );
        }
        sTaskWorkQueue.add( converterTask );
        ((XMLToXMLEventConverterTask) converterTask).setXmlDocument( xmlDocument );
        System.out.println( "Starting new thread to convert XML data" );
        retVal = (List<XMLEventData>) sThreadPool.submit( ((XMLToXMLEventConverterTask) converterTask).getTaskRunnable() ).get();
    } catch ( Exception e ) {
        e.printStackTrace();
        throw new Exception( e );
    }
    return retVal;
}
Thank you in advance!

App Engine Pull Queue tasks disappear before being properly handled

Update 4 - rephrasing question for clarity
I am using Pull Queues to feed back-end workers tasks that send push notifications. I can see the front-end instance queue the task in the logs. However, the task is only occasionally handled by the back-end. I see no indication of why the task disappears prior to being handled and deleted from the queue.
This may be related: I am seeing an unusually high number of TransientFailureExceptions when attempting to lease tasks from the queue - despite sleeping between attempts.
Everything works properly on my development server (and an earlier version had worked in production) but production is no longer working properly. At first I thought it was a certificate issue. However, notifications are sometimes sent when the backend first starts.
There is no indication that an error is happening except for the TransientFailureException when I call leaseTasks on the queue. Also, it seems to take a very long time for my logs to show up.
I can provide more information and code snippets as needed.
Thanks for the help.
Update 1:
The application uses 10 pull queues. It would normally use 2 but queue tagging is still considered experimental. They are declared in the standard fashion:
<queue>
    <name>gcm-henchdist</name>
    <mode>pull</mode>
</queue>
The lease tasks function is:
public boolean processBatchOfTasks()
{
    List< TaskHandle > tasks = attemptLeaseTasks();
    if( null == tasks || tasks.isEmpty() )
    {
        return false;
    }
    processLeasedTasks( tasks );
    return true;
}

private List< TaskHandle > attemptLeaseTasks()
{
    for( int attemptNnum = 1; !LifecycleManager.getInstance().isShuttingDown(); ++attemptNnum )
    {
        try
        {
            return m_taskQueue.leaseTasks( m_numLeaseTimeUnits, m_leaseTimeUnit, m_maxTasksPerLease );
        } catch( TransientFailureException exc )
        {
            LOG.warn( "TransientFailureException when leasing tasks from queue '{}'", m_taskQueue.getQueueName(), exc );
            ApiProxy.flushLogs();
        } catch( ApiDeadlineExceededException exc )
        {
            LOG.warn( "ApiDeadlineExceededException when leasing tasks from queue '{}'",
                m_taskQueue.getQueueName(), exc );
            ApiProxy.flushLogs();
        }
        if( !backOff( attemptNnum ) )
        {
            LOG.warn( "Failed to lease tasks." );
            break;
        }
    }
    return Collections.emptyList();
}
where the lease parameters are 30, TimeUnit.MINUTES, and 100 respectively.
The processBatchOfTasks function is polled via:
private void startPollingForClient( EClientType clientType )
{
    InterimApnsCertificateConfig config = InterimApnsCertificateConfigMgr.getConfig( clientType );
    Queue notificationQueue = QueueFactory.getQueue( config.getQueueId().getName() );
    ApplePushNotificationWorker worker = new ApplePushNotificationWorker(
        notificationQueue,
        m_messageConverter.getObjectMapper(),
        config.getCertificateBytes(),
        config.getPassword(),
        config.isProduction() );
    LOG.info( "Started worker for {} polling queue {}", clientType, notificationQueue.getQueueName() );
    while ( !LifecycleManager.getInstance().isShuttingDown() )
    {
        boolean tasksProcessed = worker.processBatchOfTasks();
        ApiProxy.flushLogs();
        if ( !tasksProcessed )
        {
            // Wait before trying to lease tasks again.
            try
            {
                //LOG.info( "Going to sleep" );
                Thread.sleep( MILLISECONDS_TO_WAIT_WHEN_NO_TASKS_LEASED );
                //LOG.info( "Waking up" );
            } catch ( InterruptedException exc )
            {
                LOG.info( "Polling loop interrupted. Terminating loop.", exc );
                return;
            }
        }
    }
    LOG.info( "Instance is shutting down" );
}
and the thread is created via:
Thread thread = ThreadManager.createBackgroundThread( new Runnable()
{
    @Override
    public void run()
    {
        startPollingForClient( clientType );
    }
} );
thread.start();
GCM notifications are handled in a similar fashion.
Update 2
The following is the backoff function. I have verified in the logs (with both GAE and my own timestamps) that the sleep is incrementing properly.
private boolean backOff( int attemptNo )
{
    // Exponential back off between 2 seconds and 64 seconds with jitter
    // 0..1000 ms.
    attemptNo = Math.min( 6, attemptNo );
    int backOffTimeInSeconds = 1 << attemptNo;
    long backOffTimeInMilliseconds = backOffTimeInSeconds * 1000 + (int)( Math.random() * 1000 );
    LOG.info( "Backing off for {} milliseconds from queue '{}'", backOffTimeInMilliseconds, m_taskQueue.getQueueName() );
    ApiProxy.flushLogs();
    try
    {
        Thread.sleep( backOffTimeInMilliseconds );
    } catch( InterruptedException e )
    {
        return false;
    }
    LOG.info( "Waking up from {} milliseconds sleep for queue '{}'", backOffTimeInMilliseconds, m_taskQueue.getQueueName() );
    ApiProxy.flushLogs();
    return true;
}
Update 3
The tasks are added to the queue within a transaction on a front-end instance:
if( null != queueType )
{
    String deviceName;
    int numDevices = deviceList.size();
    for ( int iDevice = 0; iDevice < numDevices; ++iDevice )
    {
        deviceName = deviceList.get( iDevice ).getName();
        LOG.info( "Queueing Your-Turn notification for user: {} device: {} queue: {}", user.getId(), deviceName, queueType.getName() );
        Queue queue = QueueFactory.getQueue( queueType.getName() );
        queue.addAsync( TaskOptions.Builder.withMethod( TaskOptions.Method.PULL )
            .param( "alertLocKey", "NOTIF_YOUR_TURN" ).param( "device", deviceName ) );
    }
}
I know that the transaction succeeds because the database updates correctly.
In the logs I see the "Queueing Your-Turn notification..." entry, but I see nothing appear in the back-end logs.
In the administration panel, I see Task Queue API Calls increment by 1 as well as Task Queue Stored Task Count increment by 1. However, the queue that was written to shows zero in both the Tasks In Queue and Leased In Last Minute fields.
The TransientFailureException JavaDoc says that "The requested operation may succeed if attempted again" (because the failure is transient). Therefore when this exception is thrown your code should loop back and repeat the leaseTasks call. Furthermore AppEngine does not have to redo the request itself because it notified you via the exception that you should do so.
It's a pity you repeat the method name leaseTasks as one of your own because now it's not clear which one I'm referring to when I mention leaseTasks. Still, wrap the inner call to m_taskQueue.leaseTasks in a while loop and an additional try block to catch only the TransientFailureException. Use a flag to end the while loop only if that exception is not thrown.
Is that enough explanation, or do you need a complete source code listing?
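
For instance, a rough sketch of that inner retry loop, meant to slot into the worker class from the question (the method name and the MAX_LEASE_ATTEMPTS limit are illustrative, not from the original code):

private List< TaskHandle > leaseWithRetry()
{
    List< TaskHandle > tasks = null;
    boolean leased = false;
    int attempts = 0;
    // Retry only on TransientFailureException, as its JavaDoc suggests.
    while( !leased && attempts < MAX_LEASE_ATTEMPTS )
    {
        ++attempts;
        try
        {
            tasks = m_taskQueue.leaseTasks( m_numLeaseTimeUnits, m_leaseTimeUnit, m_maxTasksPerLease );
            leased = true;    // the flag ends the loop once a lease call succeeds
        }
        catch( TransientFailureException exc )
        {
            LOG.warn( "Transient failure leasing from queue '{}', retrying", m_taskQueue.getQueueName(), exc );
        }
    }
    return leased ? tasks : Collections.<TaskHandle>emptyList();
}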
It appears that the culprit may have been that I was calling addAsync when enqueuing the task instead of just calling add.
I replaced the call and things seem to be consistently working now. I would like to know why this makes a difference and will update the answer when I find the reason.
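
In other words, the enqueue call from Update 3 becomes a plain synchronous add (a sketch using the same TaskOptions as above):

Queue queue = QueueFactory.getQueue( queueType.getName() );
// add(...) blocks until the enqueue request completes, unlike addAsync(...), which returns a Future.
queue.add( TaskOptions.Builder.withMethod( TaskOptions.Method.PULL )
    .param( "alertLocKey", "NOTIF_YOUR_TURN" ).param( "device", deviceName ) );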

Why am I getting a "Trigger's related Job's name cannot be null" error in Quartz?

I'm getting this error even though I'm specifying a name, group and description for my job, and in the debugger I can see values for all these fields in the detail variable.
JobDetail detail = getDetail();
Trigger newTrigger = getTrigger( detail );
_quartzScheduler.scheduleJob( newTrigger );

JobDetail getDetail()
{
    JobBuilder jb = JobBuilder.newJob( LoadScheduleJob.class );
    jb = jb.withIdentity( JOB_LABEL + "Fred", "Group" );
    jb = jb.withDescription( "DD" );
    jb = jb.usingJobData( SCHEDULEID_MAP_KEY, Integer.valueOf( 22 ) );
    return jb.build();
}

Trigger getTrigger( JobDetail job ) throws ParseException
{
    CronTriggerImpl t = new CronTriggerImpl(); // TriggerBuilder.newTrigger().forJob( job ).
    t.setName( TRIGGER_LABEL + 22 );
    t.setGroup( "GroupJob" );
    t.setCronExpression( "1/7 * * ? * *" );
    return t;
}
I believe Job and JobDetail are synonymous... is that correct?
Never mind. I see I never got around to assigning the trigger's job.
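
For completeness, a sketch of the missing step (assuming the getDetail()/getTrigger() methods above): either tell the hand-built trigger which job it belongs to, or let TriggerBuilder wire the job in.

// Option 1: set the job name/group on the CronTriggerImpl before scheduling it.
t.setJobName( job.getKey().getName() );
t.setJobGroup( job.getKey().getGroup() );

// Option 2: build the trigger with forJob, which copies the job key onto the trigger.
Trigger newTrigger = TriggerBuilder.newTrigger()
    .withIdentity( TRIGGER_LABEL + 22, "GroupJob" )
    .forJob( job )
    .withSchedule( CronScheduleBuilder.cronSchedule( "1/7 * * ? * *" ) )
    .build();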

Running two jobs with Quartz in Java

I have Quartz coded as follows and the first job runs perfectly:
JobDetail jd = null;
CronTrigger ct = null;
jd = new JobDetail("Job1", "Group1", Job1.class);
ct = new CronTrigger("cronTrigger1","Group1","0/5 * * * * ?");
scheduler.scheduleJob(jd, ct);
jd = new JobDetail("Job2", "Group2", Job2.class);
ct = new CronTrigger("cronTrigger2","Group2","0/20 * * * * ?");
scheduler.scheduleJob(jd, ct);
But I'm finding that Job2, which is a completely separate job from Job1, will not execute.
The scheduler is started using a listener in Java. I've also tried using scheduler.addJob(jd, true); but nothing changes. I'm running Java in a JVM on Windows 7.
How do you know the job does not run? If you substitute Job1.class for Job2.class, does it still fail? What if you swap the order in which they're added to the scheduler, or only leave Job2? Or if you strip Job2 down to only print a message to the console?
I suspect the Job2 execution dies with an exception.
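
For example, a minimal stand-in Job2 for that last test (a sketch; the real Job2 class is not shown in the question):

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class Job2 implements Job {
    public void execute( JobExecutionContext context ) throws JobExecutionException {
        // If this line never appears, the trigger is not firing at all;
        // if it appears here but the real Job2 still does nothing, the real job is likely throwing.
        System.out.println( "Job2 fired at " + context.getFireTime() );
    }
}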
