I have two TomEE instances clustered.
Each one has a method annotated like
@Schedule(dayOfWeek = "*")
public void runMeDaily() {...}
I'd like to run this method only once a day, not twice (once on each instance).
I could use a flag as described in Run @Scheduled task only on one WebLogic cluster node?, or just elect some node, but I wonder if there's a more elegant way to do that.
This question is somewhat related to EJB 3.1 @Schedule in clustered environment, but I am not using JBoss (and that question is unanswered).
I'm using the same approach as in the other thread: checking that a particular host is the correct one to run the job. But...
I'm not very into EE tools, but in Spring you can use profiles for that; you can probably find a similar solution for your needs. Take a look at http://spring.io/blog/2011/06/21/spring-3-1-m2-testing-with-configuration-classes-and-profiles
You can define two separate beans:
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
@Profile("dev")
public class StandaloneDataConfig {

    @Bean
    public DataSource dataSource() {
        // embedded in-memory database for the "dev" profile
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.HSQL)
                .addScript("classpath:com/bank/config/sql/schema.sql")
                .addScript("classpath:com/bank/config/sql/test-data.sql")
                .build();
    }
}

@Configuration
@Profile("production")
public class JndiDataConfig {

    @Bean
    public DataSource dataSource() throws Exception {
        // JNDI lookup for the "production" profile
        Context ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource");
    }
}
and decide which one to turn on by switching the profile. Your class with the @Scheduled method would then be loaded only for a specific profile. Of course, you then need to configure your app to turn the profile on on one of the nodes only. In a Spring app it would be as simple as passing -Dspring.profiles.active=profile to one of them.
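For illustration, a minimal sketch of that idea (the "scheduler" profile name and the cron expression are assumptions, and it presumes @EnableScheduling is configured somewhere):
import org.springframework.context.annotation.Profile;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
@Profile("scheduler") // hypothetical profile; enable it on one node only
public class DailyJob {

    // runs at midnight every day, but only on the node started with
    // -Dspring.profiles.active=scheduler
    @Scheduled(cron = "0 0 0 * * *")
    public void runMeDaily() {
        // your daily logic
    }
}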
I could only solve this using a non-Java-EE, platform-specific solution. In my case, I am using TomEE+ and Quartz. Running Quartz in clustered mode (org.quartz.jobStore.isClustered = true) and persisting the timers in a single database forces Quartz to choose one instance to trigger each timer, so it will only run once.
This link was very useful -- http://rmannibucau.wordpress.com/2012/08/22/tomee-quartz-configuration-for-scheduled-methods/
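For reference, a clustered JDBC job store configuration typically looks roughly like this. This is only a sketch: the data source name myDS, the driver, and the connection details are placeholders you would adapt to your own database.
# quartz.properties (illustrative values)
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.jobStore.dataSource = myDS
org.quartz.dataSource.myDS.driver = org.hsqldb.jdbcDriver
org.quartz.dataSource.myDS.URL = jdbc:hsqldb:hsql://dbhost/timers
org.quartz.dataSource.myDS.user = sa
org.quartz.dataSource.myDS.password =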
It's a shame Java EE does not (yet, I hope) specify a behavior for this. :-)
I solved this problem by making one of the boxes the master: basically, set an environment variable on one of the boxes, like master=true,
and read it in your Java code through System.getenv("master"). If it's present and true, run your code.
A basic snippet:
@Schedule(dayOfWeek = "*")
void process() {
    // only the node with the master=true environment variable runs the job
    boolean master = Boolean.parseBoolean(System.getenv("master"));
    if (master) {
        // your logic
    }
}
I want to start a self-test after Spring Integration has started. My first approach was to start it after the setup of the integration flow:
@Configuration
@EnableIntegration
@EnableIntegrationManagement
@IntegrationComponentScan
public class FlowConfig {
    ...
    @PostConstruct
    public void startSelfTest() {
        SelfTest selfTest = new SelfTest(rezeptConfig, dataSource, archiveClient);
        selfTest.run();
    }
    ...
}
This does not work: when the test ran, the tables were missing from the database because Liquibase had not run yet. I guess the Liquibase scripts are executed after bean initialization.
Any ideas on the best place to start a self-test?
Well, the best practice for low-level resource interaction is to do it when everything in the application context is already initialized. That is the phase when beans are started according to their SmartLifecycle implementation.
So, I suggest revising your solution to run from a SmartLifecycle.start() implementation.
That's exactly what we do everywhere around Spring Integration.
(Be sure that we talk exactly about the same Spring Integration: https://spring.io/projects/spring-integration)
See more info in docs: https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/core.html#beans-factory-lifecycle-processor
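A minimal sketch of that suggestion, reusing the names from the question (it assumes SelfTest is itself a bean, and relies on the default SmartLifecycle methods available since Spring Framework 5):
import org.springframework.context.SmartLifecycle;
import org.springframework.stereotype.Component;

@Component
public class SelfTestLifecycle implements SmartLifecycle {

    private final SelfTest selfTest; // assumed bean wrapping the test
    private volatile boolean running;

    public SelfTestLifecycle(SelfTest selfTest) {
        this.selfTest = selfTest;
    }

    @Override
    public void start() {
        // called only after the whole context, including Liquibase, is initialized
        selfTest.run();
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }
}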
Just guessing: what about onApplicationEvent in an ApplicationListener? It is called when Spring is initialized and ready.
E.g. check this one: How to add a hook to the application context initialization event?
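Roughly, that idea would look like the sketch below; whether Liquibase has already run by the time the event fires would still need verifying.
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

@Component
public class SelfTestOnRefresh implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        // kick off the self-test once the context has been refreshed
    }
}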
The Liquibase bean, which is responsible for creating and updating the DB tables, was started after my self-test. One solution is to use @DependsOn together with the @Bean annotation:
@Bean
@DependsOn("liquibase")
public SelfTest startSelfTest() {
    SelfTest selfTest = new SelfTest(rezeptConfig, dataSource, archiveClient);
    selfTest.run();
    return selfTest;
}
Now the self-test is started after Liquibase.
I've created a Spring Boot application with an embedded, file-based HSQL database. The data file being created is getting fairly large, especially given the usage model, so I'm wondering if there is a way it can be compacted? Either manually or automatically?
The HSQLDB documentation indicates that there is a SHUTDOWN COMPACT statement (which might take a while, according to the documentation), but I can't figure out how to configure Spring Boot to use it.
I'm open to forcing a SHUTDOWN COMPACT when shutting down the Spring Boot application (if that's the only option), or finding a way to issue a manual "compact" command (if HSQLDB supports such a command), or any other suggestions folks might have.
As mentioned, the previous answer is one option, but the suggestion by boly38 is cleaner and became my preferred approach:
@PreDestroy
public void preDestroy() {
    try {
        jdbcTemplate.execute("SHUTDOWN COMPACT");
    } catch (DataAccessException e) {
        // do nothing
    }
}
This allows the application to control when the DB is compacted, and removes a potentially dangerous option from the end user's hands.
Not sure if this is the best solution, but I did manage to find one: I added a REST API that can manually execute a CHECKPOINT DEFRAG command on the database.
In the main Spring Boot application class, I added a method for getting a JdbcTemplate like this:
@Bean
public JdbcTemplate jdbcTemplate(DataSource dataSource) {
    return new JdbcTemplate(dataSource);
}
I then decided to create a new REST controller (rather than use an existing one), to provide an API to manually compact the DB:
@RestController
@RequestMapping("/admintools")
public class TEVAdminToolsController {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @GetMapping("/compressDB")
    public Boolean compressDB() {
        try {
            jdbcTemplate.execute("CHECKPOINT DEFRAG");
        } catch (DataAccessException e) {
            return false;
        }
        return true;
    }
}
This isn't great from a security perspective; for my use case that isn't a concern, but for others it's probably a non-starter.
The two main points in this:
- The @Bean for getting a JdbcTemplate
- The call to jdbcTemplate.execute() for executing the "raw" SQL command
I have an application with some externalized configuration in the form of properties. I would like the application to react to a change of such properties without a restart or full context refresh.
I am not clear what my options are.
Artificial example: the application implements a service that receives requests and decides whether to queue or reject them. The maximum size of the queue is a property.
final int queueMaxSize = queueProperties.getMaxSize();
if (queue.size() >= queueMaxSize) { /* reject */ }
Where QueueProperties is a class annotated with @ConfigurationProperties:
@ConfigurationProperties(prefix = "myapp.limits.queue")
@Getter
@Setter
public class QueueProperties {
    public int maxSize = 10;
}
This works as far as allowing me to control behavior via system properties, profiles, etc.
However, I would like to be able to change this value without releasing/deploying/restarting/refreshing the application.
My application already uses Archaius.
If I push a new value for this property using our internal infrastructure, I can see that the application's Spring Environment does receive the new value
(e.g., /admin/env reflects the change dynamically).
The part I'm not clear on is: how to make my service react to the change of value in the environment?
I found two ways, but they seem hairy, I wonder if there are better options. I expected this to be a common problem with a first class solution in the Spring ecosystem.
Hacky solution #1:
@Bean
@Scope("prototype")
@ConfigurationProperties(prefix = "myapp.limits.queue")
QueueProperties queueProperties() {
    return new QueueProperties();
}
And inject this into the service using the properties as Provider<QueueProperties>, calling queuePropertiesProvider.get().getMaxSize() (see the sketch after this list).
This works but has a few side effects I'm not a fan of:
- The @ConfigurationProperties annotation moves from the class to the bean definition
- A new QueueProperties object is created and bound to values for every request coming in
- The Provider might throw on get()
- Invalid values are not detected until the first request comes in
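For concreteness, the injection side of that solution would look something like this sketch. QueueService and accept() are made-up names, and javax.inject.Provider must be on the classpath for Spring to inject it:
import java.util.Queue;

import javax.inject.Provider;

import org.springframework.stereotype.Service;

@Service
public class QueueService {

    private final Provider<QueueProperties> queueProperties;

    public QueueService(Provider<QueueProperties> queueProperties) {
        this.queueProperties = queueProperties;
    }

    public boolean accept(Queue<?> queue) {
        // each get() builds and binds a fresh prototype QueueProperties,
        // picking up the latest value from the Environment
        return queue.size() < queueProperties.get().getMaxSize();
    }
}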
Hacky solution #2:
Don't annotate my properties class with @ConfigurationProperties; inject the Spring Environment at construction time and implement the getters like this:
int getMaxSize() {
    return environment.getProperty("myapp.limits.queue.maxSize", Integer.class, 10);
}
This also works OK in terms of behavior. However:
- This class is not annotated as a properties class (unlike the rest of the properties classes in this large project), which makes it harder to find
- This class does not show up in /admin/configprops
Hacky solution #3:
Schedule a recurring task that uses the Environment to update my singleton QueueProperties bean.
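That could look roughly like the sketch below; the class name and the refresh interval are arbitrary, and it assumes @EnableScheduling is configured:
import org.springframework.core.env.Environment;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class QueuePropertiesRefresher {

    private final Environment environment;
    private final QueueProperties queueProperties;

    public QueuePropertiesRefresher(Environment environment, QueueProperties queueProperties) {
        this.environment = environment;
        this.queueProperties = queueProperties;
    }

    // periodically re-read the property and update the singleton bean
    @Scheduled(fixedDelay = 30000)
    public void refresh() {
        queueProperties.setMaxSize(
                environment.getProperty("myapp.limits.queue.maxSize", Integer.class, 10));
    }
}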
Any further ideas/suggestions/pointers?
Is there a canonical/recommended way to do this that does not have the shortcoming of my solutions above?
I have an application that uses Spring Batch to define a preset number of jobs, which are currently all defined in the XML.
We add more jobs over time, which requires updating the XML; however, these jobs are always based on the same parent and can easily be predetermined using a simple SQL query.
So I've been trying to switch to some combination of XML configuration and Java-based configuration, but am quickly getting confused.
Even though we have many jobs, each job definition falls into essentially one of two categories. All of the jobs inherit from one or the other parent job and are effectively identical, besides having different names. The job name is used in the process to select different data from the database.
I've come up with some code much like the following, but have run into problems getting it to work.
Full disclosure: I'm also not entirely sure I'm going about this the right way. More on that in a second; first, the code:
@Configuration
@EnableBatchProcessing
public class DynamicJobConfigurer extends DefaultBatchConfigurer implements InitializingBean {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private JobRegistry jobRegistry;

    @Autowired
    private DataSource dataSource;

    @Autowired
    private CustomJobDefinitionService customJobDefinitionService;

    private Flow injectedFlow1;
    private Flow injectedFlow2;

    public void setupJobs() throws DuplicateJobException {
        List<JobDefinition> jobDefinitions = customJobDefinitionService.getAllJobDefinitions();
        for (JobDefinition jobDefinition : jobDefinitions) {
            Job job = null;
            if (jobDefinition.getType() == 1) {
                job = jobBuilderFactory.get(jobDefinition.getName())
                        .start(injectedFlow1).build()
                        .build();
            } else if (jobDefinition.getType() == 2) {
                job = jobBuilderFactory.get(jobDefinition.getName())
                        .start(injectedFlow2).build()
                        .build();
            }
            if (job != null) {
                jobRegistry.register(new ReferenceJobFactory(job));
            }
        }
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        setupJobs();
    }

    public void setInjectedFlow1(Flow injectedFlow1) {
        this.injectedFlow1 = injectedFlow1;
    }

    public void setInjectedFlow2(Flow injectedFlow2) {
        this.injectedFlow2 = injectedFlow2;
    }
}
I have the flows that get injected defined in the XML, much like this:
<batch:flow id="injectedFlow1">
    <batch:step id="InjectedFlow1.Step1" next="InjectedFlow1.Step2">
        <batch:flow parent="InjectedFlow.Step1" />
    </batch:step>
    <batch:step id="InjectedFlow1.Step2">
        <batch:flow parent="InjectedFlow.Step2" />
    </batch:step>
</batch:flow>
So as you can see, I'm effectively kicking off the setupJobs() method (which is intended to dynamically create these job definitions) from InitializingBean's afterPropertiesSet() method. I'm not sure that's right. It does run, but I'm not sure whether there's a different entry point better intended for this purpose. Also, to be honest, I'm not sure what the point of the @Configuration annotation is.
The problem I'm currently running into is that as soon as I call register() on the JobRegistry, it throws the following IllegalStateException:
To use the default BatchConfigurer the context must contain no more than one DataSource, found 2.
Note: my project actually has two data sources defined. The first is the default dataSource bean, which connects to the database that Spring Batch uses. The second connects to an external database that contains all the information I need to define my list of jobs. The main one does use the default name "dataSource", so I'm not quite sure how else I can tell it to use that one.
First of all, I don't recommend using a combination of XML and Java configuration. Use only one, preferably the Java one, since it's not much effort to convert XML config to Java config (unless you have some very good reason to keep both that you haven't explained).
I haven't used Spring Batch alone; I have always used it with Spring Boot, and I have a project with multiple jobs defined where code similar to what you have shown has always worked well.
For your issue, there are some answers on SO like this one or this one, which basically say that you need to write your own BatchConfigurer and not rely on the default one.
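The essence of those answers is something like the following sketch; the qualifier name "dataSource" is an assumption based on your bean names:
import javax.sql.DataSource;

import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

@Component
public class MyBatchConfigurer extends DefaultBatchConfigurer {

    // pin the batch infrastructure to one specific DataSource instead of
    // letting the default configurer fail on finding two candidates
    @Autowired
    public MyBatchConfigurer(@Qualifier("dataSource") DataSource batchDataSource) {
        super(batchDataSource);
    }
}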
Now coming to a solution using Spring Boot:
With Spring Boot, you should try to segregate job definitions from job executions.
First, just define the jobs and initialize the Spring context without running any of them (spring.batch.job.enabled=false).
In your Spring Boot main method, when you start the app with something like SpringApplication.run(Application.class, args), you get back an ApplicationContext ctx.
Now you can get the relevant beans from this context and launch specific jobs, getting their names from a property, the command line, etc., and using the JobLauncher.run(...) method, as sketched below.
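A rough sketch of that launch sequence (the Launcher class, the Application class, and the run.id parameter are illustrative, not prescribed):
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;

public class Launcher {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args);

        // look up the job bean by name, e.g. taken from the command line
        Job job = ctx.getBean(args[0], Job.class);
        JobLauncher jobLauncher = ctx.getBean(JobLauncher.class);

        // a unique parameter so repeated runs create new job instances
        jobLauncher.run(job, new JobParametersBuilder()
                .addLong("run.id", System.currentTimeMillis())
                .toJobParameters());
    }
}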
You can refer to this answer of mine if you want to order job executions. You can also write job schedulers in Java.
The point being, you separate your job building / bean configuration concerns from your job execution concerns.
Challenge
Keeping multiple jobs in a single project can be challenging when you try to have different settings for each job, because the application.properties file is environment-specific, not job-specific; i.e., Spring Boot properties apply to all jobs.
In my particular case, the solution was to actually eliminate the @Configuration and @EnableBatchProcessing annotations from my class above. Something about these caused it to try to use the DefaultBatchConfigurer, which fails when you have more than one data source defined (even if you've identified them clearly, with "dataSource" as the primary and some other name for the secondary).
The @Configuration annotation in particular wasn't necessary, because all it really does is let your class get auto-instantiated without having to define it as a bean in the app context. Since I was doing that anyway, it was superfluous.
One of the downsides of removing @EnableBatchProcessing was that I could no longer autowire the JobBuilderFactory bean. So instead I had to create it myself:
JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setDataSource(dataSource);
factory.setTransactionManager(transactionManager);
factory.afterPropertiesSet();
jobRepository = factory.getObject();
jobBuilderFactory = new JobBuilderFactory(jobRepository);
Then it seems I was already on the right track by using jobRegistry.register(...) to define my jobs. Essentially, once I removed those annotations, everything started working. I'm going to mark Sabir's answer as the correct one, however, because it helped me out.
Right now I'm exposing the service layer of my application using Spring Remoting's RMI/SOAP/JMS/Hessian/Burlap/HttpInvoker exporters. What I'd like is to allow the user to define which of these remoting mechanisms to enable (rather than enabling all of them), and then only create those exporter beans.
I was hoping that Spring's application context XML had support for conditional blocks around portions of the XML. However, from what I've seen so far, there's nothing in the standard Spring distribution that allows this.
Are there any other ways to achieve what I'm trying to do?
I am going to assume that you are looking to configure your application based on your environment, as in: for production I want to use these beans, in dev these others...
As Ralph says, since Spring 3.1 you have profiles. But the key is to understand that you should put your environment-specific beans in different configuration files, so you could have something like dev-beans.xml and prod-beans.xml. Then, in your main Spring file, you just import the appropriate one based on the environment you are using. Profiles are only one technique for doing this; you can also use others, like a system environment variable, or passing a parameter in your build to decide which beans you want to use.
You could realize this by using a Spring @Configuration bean, so you can construct your beans in Java code (see http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/beans.html#beans-java):
@Configuration
public class AppConfig {

    @Bean
    public RemoteExporter myServiceExporter() {
        // RemoteExporter is the common base class of both exporters
        if (userSettingIsHessian) {
            return new HessianExporter();
        } else {
            return new BurlapExporter();
        }
    }
}
Of course you need to get the user setting from somewhere; a system parameter would be easy, or a config file, or something else.
Spring 3.1 has the concept of profiles. Maybe you can use them.
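Combining both answers, a profile-guarded exporter could look roughly like this sketch; the profile name, the "/myService" URL mapping, and the MyService interface are assumptions, not from the original:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.remoting.caucho.HessianServiceExporter;

@Configuration
@Profile("hessian") // activate with -Dspring.profiles.active=hessian
public class HessianExportConfig {

    // the bean name doubles as the URL mapping for BeanNameUrlHandlerMapping
    @Bean(name = "/myService")
    public HessianServiceExporter myServiceExporter(MyService myService) {
        HessianServiceExporter exporter = new HessianServiceExporter();
        exporter.setService(myService);
        exporter.setServiceInterface(MyService.class);
        return exporter;
    }
}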