Start Quartz Scheduler without firing the triggers - java

I have a requirement where I need to start the scheduler, but the existing triggers should not run. I have a Spring Boot application, and I want to start the application without running the triggers that already exist in the system.
It can be flag- or parameter-based. Is there any way to achieve this? I have been searching the web but could not find anything on this.

One option is to create the scheduler instance but leave auto-startup disabled. The application will then start, but no triggers will fire.
The code fragment is in Kotlin, but it should look similar in Java:
@Bean
open fun quartzScheduler(springBeanJobFactory: SpringBeanJobFactory): SchedulerFactoryBean {
    val quartzSchedulerFactory = SchedulerFactoryBean()
    quartzSchedulerFactory.setJobFactory(springBeanJobFactory)
    quartzSchedulerFactory.setConfigLocation(ClassPathResource("quartz.properties"))
    quartzSchedulerFactory.setOverwriteExistingJobs(true)
    quartzSchedulerFactory.isAutoStartup = false
    return quartzSchedulerFactory
}
Then you can create a simple controller (or any other entry point) to start the scheduler:
schedulerFactory.scheduler.start()
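In Java, a minimal sketch of such a controller could look like this (the endpoint path and the injected SchedulerFactoryBean are illustrative):
@RestController
public class SchedulerController {

    @Autowired
    private SchedulerFactoryBean schedulerFactoryBean;

    // Starts the Quartz scheduler on demand, so the existing triggers
    // only begin firing once this endpoint is called.
    @PostMapping("/scheduler/start")
    public void startScheduler() throws SchedulerException {
        schedulerFactoryBean.getScheduler().start();
    }
}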

A variation written in Java, driven by application properties:
@Bean
@Inject
SchedulerFactoryBeanCustomizer schedulerCustomizer(
        @Value("${spring.quartz.properties.org.quartz.scheduler.instanceName:schedulerFactoryBean}") String schedulerName,
        @Value("${spring.quartz.properties.org.quartz.scheduler.enabled:true}") boolean enabled) {
    return (schedulerFactoryBean) -> {
        schedulerFactoryBean.setOverwriteExistingJobs(true);
        schedulerFactoryBean.setBeanName(schedulerName);
        schedulerFactoryBean.setAutoStartup(enabled);
    };
}

Related

Micronaut, db-scheduler: No current transaction present. Consider declaring @Transactional on the surrounding method

I'm trying to use db-scheduler with Micronaut. Therefore, I created a @Singleton service where I inject the actual DataSource, which is of type TransactionAwareDataSource. I then call a method to set up the scheduler, which is something like:
@Transactional
public void createJob() {
    RecurringTask<Void> hourlyTask = Tasks.recurring("my-hourly-task", FixedDelay.ofHours(1))
            .execute((inst, ctx) -> {
                System.out.println("Executed!");
            });
    final Scheduler scheduler = Scheduler
            .create(dataSource)
            .startTasks(hourlyTask)
            .threads(5)
            .build();
    scheduler.start();
}
which, at "create" throws this exception:
io.micronaut.transaction.exceptions.NoTransactionException: No current transaction present. Consider declaring @Transactional on the surrounding method
at io.micronaut.transaction.jdbc.TransactionalConnectionInterceptor.intercept(TransactionalConnectionInterceptor.java:65)
at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:96)
at io.micronaut.transaction.jdbc.TransactionalConnection$Intercepted.getMetaData(Unknown Source)
at com.github.kagkarlsson.scheduler.jdbc.AutodetectJdbcCustomization.<init>(AutodetectJdbcCustomization.java:40)
at com.github.kagkarlsson.scheduler.SchedulerBuilder.lambda$build$0(SchedulerBuilder.java:190)
at java.base/java.util.Optional.orElseGet(Optional.java:369)
Everywhere else in my app everything works as it should, meaning I can read and write to the DB via the repositories, and @Transactional works as well.
I'm not 100% sure where the problem is, but I guess it has to do with where the annotation is placed, which in this case is nothing I can really change. On the other hand, if I create the DataSource manually, effectively bypassing Micronaut, it works.
BTW: the exception comes up within db-scheduler where the first call to the DB is made (c.getMetaData().getDatabaseProductName()).
Micronaut-Version: 2.3.4, Micronaut-Data: 2.2.4, everything setup properly.
Do you guys have any ideas how to solve this problem? Or is it even a bug?
Thanks!
So the problem is that Micronaut Data wraps the DataSource in a TransactionAwareDataSource, as you mentioned. The library you use (db-scheduler), or mine (JobRunr), picks up that wrapper and then operates on it outside the transactional context it requires. The solution is to unwrap it before giving it to db-scheduler or JobRunr:
Kotlin:
val unwrappedDataSource = (dataSource as DelegatingDataSource).targetDataSource
Java:
DataSource unwrappedDataSource = ((DelegatingDataSource) dataSource).getTargetDataSource();
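Applied to the createJob() method from the question, a rough sketch (assuming the injected dataSource is Micronaut's DelegatingDataSource wrapper) would be:
// Unwrap the transaction-aware wrapper before handing the DataSource to db-scheduler,
// so its internal JDBC calls no longer require a surrounding @Transactional.
DataSource unwrappedDataSource = ((DelegatingDataSource) dataSource).getTargetDataSource();

final Scheduler scheduler = Scheduler
        .create(unwrappedDataSource)
        .startTasks(hourlyTask)
        .threads(5)
        .build();
scheduler.start();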

How do you separate roles in Spring Boot? (Web vs Scheduler, etc.)

I am coming from a (mostly) Python Django/Celery background and starting with Spring Boot.
I am having a hard time understanding how you separate the roles.
For example, in a Django/Celery project, I would have on one side the web backends started as gunicorn, and on the other side the workers started as celery (so, different commands, but pointing at the same code).
But in Spring Boot, you only have a single entry point, and as soon as the scheduler is injected, the jobs start being processed.
What is the correct way to separate those like in a Django/Celery application?
Should I put almost all my code in a library and then create two final applications, one that sets up the DispatcherServlet and another one that sets up @EnableScheduling, or is there some kind of configuration to be injected at runtime?
In my opinion, if 'the web' and 'the scheduler' are both important functions of the application, then you don't need to separate them as long as you are building a monolithic application.
Because you are using Spring Boot, the DispatcherServlet and all the other web components a web application needs will be injected and configured automatically. The only thing you have to do is create classes annotated with @Controller or @RestController and set up the @RequestMapping methods inside them.
What about the scheduler? You need to add @EnableScheduling to one of your @Configuration classes first, then create a scheduler class in a scheduler package like the code sample below.
You can use the cron property to set the exact execution time, just like a Linux crontab. The jobs start being processed only when the cron time is up.
@Component
public class PlatformScheduler {

    @Autowired
    private BatchService batchService;

    @Scheduled(cron = "0 0 12 * * *")
    public void dailyInitialize() {
        clearCompletedBatches();
        queryBatchesToRunToday();
    }

    @Scheduled(fixedRate = 10000, initialDelay = 10000)
    private void harvestCompletedBatches() {
        batchService.harvestCompletedBatches();
    }

    @Scheduled(fixedRate = 10000, initialDelay = 10000)
    private void executeWaitingBatches() {
        batchService.executeWaitingBatches(new DateTime());
    }
}
In the simplest project hierarchy, 'the web' and 'the scheduler' can live in the same project safely and share the same @Service components without harm.
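For completeness, the @EnableScheduling configuration mentioned above can be as small as this sketch (class and package names are illustrative):
@SpringBootApplication
@EnableScheduling // enables processing of @Scheduled methods such as those in PlatformScheduler
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}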

How to integrate a self-test into Spring-Integration?

I want to run a self-test after Spring Integration has started. My first approach was to start it after the setup of the integration flow:
@Configuration
@EnableIntegration
@EnableIntegrationManagement
@IntegrationComponentScan
public class FlowConfig {
    ...
    @PostConstruct
    public void startSelfTest() {
        SelfTest selfTest = new SelfTest(rezeptConfig, dataSource, archiveClient);
        selfTest.run();
    }
    ...
}
This does not work because, when the test ran, the tables in the database were missing: Liquibase had not run yet. I guess the Liquibase scripts are executed after this initialization step.
Any ideas on the best place to start a self-test?
Well, the best practice for low-level resource interaction is to perform it when everything in the application context is already initialized. That is the phase when beans are started according to their SmartLifecycle implementation.
So, I suggest revising your solution to run from a SmartLifecycle.start() implementation.
That's exactly what we do everywhere in Spring Integration.
(Be sure that we are talking about the same Spring Integration: https://spring.io/projects/spring-integration)
See more info in docs: https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/core.html#beans-factory-lifecycle-processor
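A rough sketch of that approach (the SelfTest construction is copied from the question; the dependency types RezeptConfig and ArchiveClient are guesses based on the field names):
@Component
public class SelfTestLifecycle implements SmartLifecycle {

    private final SelfTest selfTest;
    private volatile boolean running;

    public SelfTestLifecycle(RezeptConfig rezeptConfig, DataSource dataSource, ArchiveClient archiveClient) {
        this.selfTest = new SelfTest(rezeptConfig, dataSource, archiveClient);
    }

    @Override
    public void start() {
        // Called only after the whole context, including the Liquibase bean, is initialized.
        selfTest.run();
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }
}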
Just guessing: what about onApplicationEvent in an ApplicationListener? This is called when Spring is initialized and ready.
E.g. check this one: How to add a hook to the application context initialization event?
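A minimal sketch of that variant, listening for ContextRefreshedEvent (which fires once all singletons, including the Liquibase bean, have been created):
@Component
public class SelfTestListener implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        // Run the self-test here; the database schema should be in place by now.
    }
}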
The Liquibase bean, which is responsible for creating and updating the DB tables, was started after my self-test. One solution is to use @DependsOn together with the @Bean annotation:
@Bean
@DependsOn("liquibase")
public SelfTest startSelfTest() {
    SelfTest selfTest = new SelfTest(rezeptConfig, dataSource, archiveClient);
    selfTest.run();
    return selfTest;
}
Now the self-test runs after Liquibase.

Making #Schedule run only once in a clustered environment

I have two tomee instances clustered.
Each one has a method annotated like
@Schedule(dayOfWeek = "*")
public void runMeDaily() {...}
I'd like to run this method only once a day, not twice a day (once on each instance).
I could use a flag as described here, Run @Scheduled task only on one WebLogic cluster node?, or just elect some node, but I wonder if there's a more elegant way to do that.
This question is somewhat related to EJB3.1 @Schedule in clustered environment, but I am not using JBoss (and it's not answered).
I'm using the same approach as in the other thread - checking that a particular host is the correct one to run the job. But...
I'm not very into EE tools, but in Spring you can use profiles for that. Probably you can find a similar solution for your needs. Take a look at http://spring.io/blog/2011/06/21/spring-3-1-m2-testing-with-configuration-classes-and-profiles
You can define two separate beans:
@Configuration
@Profile("dev")
public class StandaloneDataConfig {

    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.HSQL)
                .addScript("classpath:com/bank/config/sql/schema.sql")
                .addScript("classpath:com/bank/config/sql/test-data.sql")
                .build();
    }
}

@Configuration
@Profile("production")
public class JndiDataConfig {

    @Bean
    public DataSource dataSource() throws Exception {
        Context ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource");
    }
}
and decide which one to turn on by switching profiles. Your class with the @Scheduled method would then be loaded only for a specific profile. Of course, you then need to configure your app so that the profile is active on one of the nodes only. In a Spring app that is as simple as passing -Dspring.profiles.active=profile to one of them; see the sketch below.
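Applied to the scheduling side rather than the DataSource, a hedged sketch (the profile and class names are illustrative):
@Component
@Profile("scheduler") // loaded only on the node started with -Dspring.profiles.active=scheduler
public class DailyJob {

    @Scheduled(cron = "0 0 12 * * *")
    public void runMeDaily() {
        // job logic runs on exactly one node: the one with the "scheduler" profile active
    }
}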
I could only solve this using a non-Java EE solution, specific to the platform (proprietary). In my case, I am using TomEE+ and Quartz. Running Quartz in the clustered mode (org.quartz.jobStore.isClustered = true) and persisting the timers in a single database forces Quartz to choose an instance to trigger the timer, so it will only run once.
This link was very useful -- http://rmannibucau.wordpress.com/2012/08/22/tomee-quartz-configuration-for-scheduled-methods/
It's a shame Java EE does not specify a behavior for that. (yet, I hope) :-)
I solved this problem by making one of the boxes the master. Basically, set an environment variable on one of the boxes, like master=true,
and read it in your Java code through System.getenv("master"). If it is present and true, then run your code.
A basic snippet:
@Schedule()
void process() {
    boolean master = Boolean.parseBoolean(System.getenv("master"));
    if (master) {
        // your logic
    }
}

How to suspend a job in the Quartz scheduler?

Hi, I am creating an application that executes a method of a class based on a cron expression. For that I am using Spring Quartz, where I have to configure all this in my Spring file. It works fine and the jobs execute based on the cron expression. But now I want to suspend the next execution of a particular job from Java code, based on the user's choice in the UI. Is there any way to do this?
Can I get the details of all running jobs in the context? If so, I can filter the jobs and try to suspend a job's next execution.
Inject your SchedulerFactoryBean. Use its getScheduler method to obtain a Quartz Scheduler and use rescheduleJob, pauseJob, or other methods from the Quartz API to perform your task.
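For example, a rough sketch with the Quartz 2.x JobKey API (the job and group names are illustrative):
@Autowired
private SchedulerFactoryBean schedulerFactoryBean;
...
Scheduler scheduler = schedulerFactoryBean.getScheduler();
scheduler.pauseJob(JobKey.jobKey("myJob", "myGroup"));    // suspend future executions of this job
// scheduler.resumeJob(JobKey.jobKey("myJob", "myGroup")); // resume it later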
I got it to work with the following code,
where stdScheduler is the Scheduler obtained from the SchedulerFactoryBean:
String[] groupNames = stdScheduler.getJobGroupNames();
for (String groupName : groupNames) {
    String[] jobNames = stdScheduler.getJobNames(groupName);
    for (String jobName : jobNames) {
        if (jobName.equals("jobName")) {
            stdScheduler.pauseJob(jobName, groupName);
        }
    }
}
We can use the stdScheduler.pauseJob(JobKey jobKey) method to avoid the loops in the code above.
If you got hold of the SchedulerFactoryBean by injection or some other way, there are the convenience methods:
schedulerFactoryBean.getScheduler().standby();
schedulerFactoryBean.getScheduler().start();
When using Quartz directly, this also works:
@Autowired
private Scheduler scheduler;
...
scheduler.standby();
...
scheduler.start();
