I want to run a self-test after Spring Integration has started. My first approach was to run it after the setup of the integration flow:
@Configuration
@EnableIntegration
@EnableIntegrationManagement
@IntegrationComponentScan
public class FlowConfig {
    ...
    @PostConstruct
    public void startSelfTest() {
        SelfTest selfTest = new SelfTest(rezeptConfig, dataSource, archiveClient);
        selfTest.run();
    }
    ...
}
This does not work: when the test ran, the tables were missing from the database because Liquibase had not run yet. I guess the Liquibase scripts are executed after initialization.
Any ideas on the best place to start a self-test?
Well, the best practice for low-level resource interaction is to do it when everything in the application context is already initialized. That is the phase when beans are started according to their SmartLifecycle implementation.
So I suggest revising your solution to be done from a SmartLifecycle.start() implementation.
That's exactly what we do everywhere around in Spring Integration.
(Be sure that we talk exactly about the same Spring Integration: https://spring.io/projects/spring-integration)
See more info in docs: https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/core.html#beans-factory-lifecycle-processor
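For example, a minimal sketch (the collaborator types RezeptConfig and ArchiveClient are assumed from the question's constructor call; on Spring 5 the remaining SmartLifecycle methods have usable defaults):

@Component
public class SelfTestLifecycle implements SmartLifecycle {

    private final RezeptConfig rezeptConfig;
    private final DataSource dataSource;
    private final ArchiveClient archiveClient;
    private volatile boolean running;

    public SelfTestLifecycle(RezeptConfig rezeptConfig, DataSource dataSource, ArchiveClient archiveClient) {
        this.rezeptConfig = rezeptConfig;
        this.dataSource = dataSource;
        this.archiveClient = archiveClient;
    }

    @Override
    public void start() {
        // Runs only after every bean (including Liquibase) has been created and initialized
        new SelfTest(rezeptConfig, dataSource, archiveClient).run();
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public int getPhase() {
        // A late phase, so the integration endpoints are already started when the self-test runs
        return Integer.MAX_VALUE;
    }
}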
Just guessing, but what about onApplicationEvent in an ApplicationListener? It is called when Spring is initialized and ready.
E.g. check this one: How to add a hook to the application context initialization event?
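A minimal sketch of that idea (assuming SelfTest is itself registered as a bean; the listener name is made up):

@Component
public class SelfTestOnStartup implements ApplicationListener<ContextRefreshedEvent> {

    private final SelfTest selfTest;

    public SelfTestOnStartup(SelfTest selfTest) {
        this.selfTest = selfTest;
    }

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        // ContextRefreshedEvent is published once the application context is fully initialized
        selfTest.run();
    }
}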
The Liquibase bean, which is responsible for creating and updating the DB tables, is started after my self-test. One solution is to use @DependsOn together with the @Bean annotation:
@Bean
@DependsOn("liquibase")
public SelfTest startSelfTest() {
    SelfTest selfTest = new SelfTest(rezeptConfig, dataSource, archiveClient);
    selfTest.run();
    return selfTest;
}
Now the self-test is started after Liquibase.
In my project, I have a lot of Spring-managed components that do about the same thing. I want to create a common Util class that handles the common operations for all my components. Since this Util class needs to access environment variables and beans, it's instantiated like this:
// Util class:
public class FooUtil {
    public FooUtil(Environment env) {
        env.getProperty("FOO_TOPIC", "foo");
    }
}

// Example configuration for one of my components:
@Configuration
public class ComponentConfig {
    @Bean
    FooUtil fooUtil(Environment env) {
        return new FooUtil(env);
    }
}
This allows FooUtil to access all environment variables and beans without itself being a component.
Now, this Util class also needs to listen to Kafka topics. Each component currently has a listener set up like this:
@KafkaListener(topics = "${FOO_TOPIC:foo2}", containerFactory = "kafkaListenerContainerFactory")
private void fooListener(ConsumerRecord<String, Foo> rec) {
    // Stuff...
}
I want to move this Kafka listener into FooUtil. How can I do this? To be clear, I want FooUtil to start listening as soon as it's instantiated and initialized by a component.
Since FooUtil is not managed by Spring, you're unable to use the @KafkaListener annotation. If FooUtil were a bean managed by Spring, it would be picked up and the listener annotation would cause Spring to wire up the listener. All of this is done by KafkaListenerAnnotationBeanPostProcessor, I believe.
Does FooUtil have to be an unmanaged bean? I might be missing some details, but from the question I can't see why it shouldn't be possible. If you need different instances for every bean using it, you can use @Scope("prototype") on FooUtil.
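For illustration, a managed FooUtil could look roughly like this (the topic, default value and containerFactory are taken from the question; the rest is a sketch):

@Component
public class FooUtil {

    private final String fooTopic;

    public FooUtil(Environment env) {
        this.fooTopic = env.getProperty("FOO_TOPIC", "foo");
    }

    @KafkaListener(topics = "${FOO_TOPIC:foo2}", containerFactory = "kafkaListenerContainerFactory")
    public void fooListener(ConsumerRecord<String, Foo> rec) {
        // Stuff...
    }
}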
Turns out you can create a Kafka listener without using the @KafkaListener annotation (thanks Gary Russell). Just follow the instructions here (douevencode.com) for how to do it.
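The gist of that approach, as a rough sketch (the topic name and the consumerFactory bean are assumptions; depending on the spring-kafka version, ContainerProperties lives in org.springframework.kafka.listener or ...listener.config):

// Programmatic listener: no @KafkaListener annotation, so it works outside a Spring-managed bean
ContainerProperties containerProps = new ContainerProperties("foo2");
containerProps.setMessageListener((MessageListener<String, Foo>) record -> {
    // Stuff...
});
KafkaMessageListenerContainer<String, Foo> container =
        new KafkaMessageListenerContainer<>(consumerFactory, containerProps);
container.start(); // e.g. called from FooUtil's constructor or an init method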
I have an application that uses Spring Batch to define a preset number of jobs, which are currently all defined in the XML.
We add more jobs over time which requires updating the XML, however these jobs are always based on the same parent and can easily be predetermined using a simple SQL query.
So I've been trying to switch to use some combination of XML configuration and Java-based configuration but am quickly getting confused.
Even though we have many jobs, each job definition falls into essentially one of two categories. All of the jobs inherit from one or the other parent job and are effectively identical, besides having different names. The job name is used in the process to select different data from the database.
I've come up with some code much like the following but have run into problems getting it to work.
Full disclaimer that I'm also not entirely sure I'm going about this in the right way. More on that in a second; first, the code:
@Configuration
@EnableBatchProcessing
public class DynamicJobConfigurer extends DefaultBatchConfigurer implements InitializingBean {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private JobRegistry jobRegistry;

    @Autowired
    private DataSource dataSource;

    @Autowired
    private CustomJobDefinitionService customJobDefinitionService;

    private Flow injectedFlow1;
    private Flow injectedFlow2;

    public void setupJobs() throws DuplicateJobException {
        List<JobDefinition> jobDefinitions = customJobDefinitionService.getAllJobDefinitions();
        for (JobDefinition jobDefinition : jobDefinitions) {
            Job job = null;
            if (jobDefinition.getType() == 1) {
                job = jobBuilderFactory.get(jobDefinition.getName())
                        .start(injectedFlow1).build()
                        .build();
            } else if (jobDefinition.getType() == 2) {
                job = jobBuilderFactory.get(jobDefinition.getName())
                        .start(injectedFlow2).build()
                        .build();
            }
            if (job != null) {
                jobRegistry.register(new ReferenceJobFactory(job));
            }
        }
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        setupJobs();
    }

    public void setInjectedFlow1(Flow injectedFlow1) {
        this.injectedFlow1 = injectedFlow1;
    }

    public void setInjectedFlow2(Flow injectedFlow2) {
        this.injectedFlow2 = injectedFlow2;
    }
}
I have the flows that get injected defined in the XML, much like this:
<batch:flow id="injectedFlow1">
    <batch:step id="InjectedFlow1.Step1" next="InjectedFlow1.Step2">
        <batch:flow parent="InjectedFlow.Step1" />
    </batch:step>
    <batch:step id="InjectedFlow1.Step2">
        <batch:flow parent="InjectedFlow.Step2" />
    </batch:step>
</batch:flow>
So as you can see, I'm effectively kicking off the setupJobs() method (which is intended to dynamically create these job definitions) from the afterPropertiesSet() method of InitializingBean. I'm not sure that's right. It is running, but I'm not sure if there's a different entry point better intended for this purpose. Also, to be honest, I'm not sure what the point of the @Configuration annotation is.
The problem I'm currently running into is as soon as I call register() from JobRegistry, it throws the following IllegalStateException:
To use the default BatchConfigurer the context must contain no more than one DataSource, found 2.
Note: my project actually has two data sources defined. The first is the default dataSource bean which connects to the database that Spring Batch uses. The second data source is an external database, and this second one contains all the information I need to define my list of jobs. But the main one does use the default name "dataSource" so I'm not quite sure how else I can tell it to use that one.
First of all, I don't recommend using a combination of XML and Java configuration. Use only one, preferably Java, as it is not much effort to convert XML config to Java config (unless you have some very good reasons to keep both that you haven't explained).
I haven't used Spring Batch on its own; I have always used it with Spring Boot, and I have a project with multiple jobs defined where code similar to what you have shown has always worked well.
For your issue, there are some answers on SO (like this or this) which basically say that you need to write your own BatchConfigurer and not rely on the default one.
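A minimal sketch of such a configurer, assuming the Batch metadata should live in the primary "dataSource" bean:

@Component
public class PrimaryDataSourceBatchConfigurer extends DefaultBatchConfigurer {

    // Explicitly tells Spring Batch which of the two DataSources to use for its metadata tables
    @Autowired
    public PrimaryDataSourceBatchConfigurer(@Qualifier("dataSource") DataSource dataSource) {
        super(dataSource);
    }
}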
Now, coming to a solution using Spring Boot:
With Spring Boot, you should try to segregate job definitions from job executions.
First, just define the jobs and initialize the Spring context without launching any jobs on startup (spring.batch.job.enabled=false).
In your Spring Boot main method, when you start the app with something like SpringApplication.run(Application.class, args), you get back an ApplicationContext ctx.
Now you can get the relevant beans from this context and launch specific jobs, taking the names from a property, the command line, etc. and using the JobLauncher.run(...) method (see the sketch below).
You can refer to this answer of mine if you want to order job executions. You can also write job schedulers in Java.
The point being, you separate your job building / bean configuration concerns from your job execution concerns.
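A rough sketch of that execution side (the job name and the Application class are placeholders):

public static void main(String[] args) throws Exception {
    ApplicationContext ctx = SpringApplication.run(Application.class, args);

    // Jobs are only defined at this point because spring.batch.job.enabled=false
    JobLauncher jobLauncher = ctx.getBean(JobLauncher.class);
    Job job = ctx.getBean("myDynamicJob", Job.class);

    JobParameters params = new JobParametersBuilder()
            .addLong("run.id", System.currentTimeMillis())
            .toJobParameters();
    jobLauncher.run(job, params);
}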
Challenge
Keeping multiple jobs in a single project can be challenging when you try to have different settings for each job, because the application.properties file is environment-specific, not job-specific, i.e. Spring Boot properties apply to all jobs.
In my particular case, the solution was to actually eliminate the @Configuration and @EnableBatchProcessing annotations from my class above. Something about these caused it to try to use the DefaultBatchConfigurer, which fails when you have more than one data source defined (even if you've identified them clearly with "dataSource" as the primary and some other name for the secondary).
The @Configuration annotation in particular wasn't necessary, because all it really does is let your class get auto-instantiated without having to define it as a bean in the app context. But since I was doing that anyway, this one was superfluous.
One of the downsides of removing @EnableBatchProcessing was that I could no longer auto-wire the JobBuilderFactory bean. So instead I had to create it myself:
// Build the JobRepository manually against the Batch data source and transaction manager
JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setDataSource(dataSource);
factory.setTransactionManager(transactionManager);
factory.afterPropertiesSet();
jobRepository = factory.getObject();
// ...and create the JobBuilderFactory on top of that repository
jobBuilderFactory = new JobBuilderFactory(jobRepository);
Then it seems I was on the right track already by using jobRegistry.register(...) to define my jobs. So essentially once I removed those annotations above everything started working. I'm going to mark Sabir's answer as the correct one however because it helped me out.
We're using Spring 4.x, Hibernate 5.x and Spring Data 1.11, and we currently have a SQL interceptor that extends org.hibernate.EmptyInterceptor, which we basically hook up manually at the start of the web request using HibernateInterceptor.setInterceptor. We also have jobs that run in the background via the Spring task scheduler. These start their own transactions, which obviously don't get the interceptor attached to them. What I'm trying to do at this point is find a way to intercept Spring's @Transactional in all cases.
I've looked into TransactionInterceptor and @TransactionalEventListener and so far haven't gotten either of them to work, and it's hard to figure out what is currently considered best practice with Spring.
So basically the problem we're trying to solve is that at the end of a transaction we need to know whether it failed or succeeded.
So what is the current best practice with Spring to always get pre/post commit events so we can respond as needed?
edit
Realized that @TransactionalEventListener wouldn't work, as we're not using Spring events, so it was just a misunderstanding on my part of what it really does.
One way you could do it is to create a simple aspect, something like:
@Aspect
@Component
public class AfterTransactionalAspect {

    @After("@annotation(org.springframework.transaction.annotation.Transactional)")
    public void cleanupAfterTransaction(JoinPoint joinPoint) throws Throwable {
        // ... Do cleanup work here
    }
}
Another good way to go (if you're publishing Spring application events) would be to use @TransactionalEventListener.
Are you sure you're using an ApplicationEventPublisher to publish the events? Do you definitely have @EnableTransactionManagement on your config, @TransactionalEventListener on a public method, a TransactionTemplate set up, and @Transactional on the method where the event is published?
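For completeness, a small sketch of that event-based variant (the event class and method names are made up):

// Inside the transactional code, publish a regular Spring event
@Transactional
public void doWork() {
    // ... business logic ...
    publisher.publishEvent(new WorkCompletedEvent(this)); // publisher is an injected ApplicationEventPublisher
}

// A listener bound to the outcome of the surrounding transaction
@Component
public class WorkCompletedListener {

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onCommit(WorkCompletedEvent event) {
        // transaction succeeded
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void onRollback(WorkCompletedEvent event) {
        // transaction failed
    }
}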
I have two clustered TomEE instances.
Each one has a method annotated like:
@Schedule(dayOfWeek = "*")
public void runMeDaily() {...}
I'd like to run this method only once a day, not twice a day (once on each instance).
I could use a flag as described here (Run @Scheduled task only on one WebLogic cluster node?) or just elect some node, but I wonder if there's a more elegant way to do that.
This question is somewhat related to EJB3.1 @Schedule in clustered environment, but I am not using JBoss (and it's not answered).
I'm using the same approach as in the other thread: checking that a particular host is the correct one to run the job. But...
I'm not very into EE tools, but in Spring you can use profiles for that. Probably you can find a similar solution for your needs. Take a look at http://spring.io/blog/2011/06/21/spring-3-1-m2-testing-with-configuration-classes-and-profiles
You can define two separate beans:
@Configuration
@Profile("dev")
public class StandaloneDataConfig {

    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.HSQL)
                .addScript("classpath:com/bank/config/sql/schema.sql")
                .addScript("classpath:com/bank/config/sql/test-data.sql")
                .build();
    }
}

@Configuration
@Profile("production")
public class JndiDataConfig {

    @Bean
    public DataSource dataSource() throws Exception {
        Context ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource");
    }
}
and decide which one to turn on by switching the profile. So your class with the @Scheduled method would be loaded only for a specific profile. Of course, then you need to configure your app to turn on that profile on one of the nodes only. In a Spring app it would be as simple as passing -Dspring.profiles.active=profile to one of them.
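The scheduled bean itself would then just carry the profile; roughly (the profile name is illustrative, and @EnableScheduling must be present somewhere):

@Component
@Profile("scheduler") // activate this profile on exactly one node
public class DailyJob {

    @Scheduled(cron = "0 0 0 * * *") // once a day, analogous to the EJB @Schedule above
    public void runMeDaily() {
        // ...
    }
}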
I could only solve this using a non-Java EE solution, specific to the platform (proprietary). In my case, I am using TomEE+ and Quartz. Running Quartz in the clustered mode (org.quartz.jobStore.isClustered = true) and persisting the timers in a single database forces Quartz to choose an instance to trigger the timer, so it will only run once.
This link was very useful -- http://rmannibucau.wordpress.com/2012/08/22/tomee-quartz-configuration-for-scheduled-methods/
It's a shame Java EE does not specify a behavior for that. (yet, I hope) :-)
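For reference, the relevant Quartz settings from such a setup look roughly like this (driver, URL and data source name are examples, not from the original configuration):

# quartz.properties - clustered JDBC job store shared by both TomEE instances
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.dataSource = quartzDS
org.quartz.dataSource.quartzDS.driver = org.postgresql.Driver
org.quartz.dataSource.quartzDS.URL = jdbc:postgresql://dbhost:5432/quartz
org.quartz.dataSource.quartzDS.user = quartz
org.quartz.dataSource.quartzDS.password = secret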
I solved this problem by making one of the boxes the master: basically, set an environment variable on one of the boxes, like master=true,
and read it in your Java code through System.getenv("master"). If it's present and true, then run your code.
Basic snippet:
@Schedule()
void process() {
    boolean master = Boolean.parseBoolean(System.getenv("master"));
    if (master) {
        // your logic
    }
}
I am currently migrating to EJB 3.1 after using Spring for many years. One thing I would like to implement in EJB, for which I couldn't find a matching pattern yet, is my MigrationManager.
In Spring I had a bean that dealt with database schema and data migration. For this I implemented a Spring BeanFactoryPostProcessor, because that way I had the database connection injected while the JPA system was not yet initialized. So I could perform all migration steps and then have the application finish starting.
How can I do something like this in EJB 3.1 (using CDI ... if this is of importance)?
Chris
This is the way to run some initialization code from an EJB:
@Singleton
@Startup
public class MigrationManager {

    @PostConstruct
    public void migrate() {
        // do work
    }
}
You don't need a separate app for that (as suggested in a comment above).
EntityManagers get instantiated lazily, so as long as you don't inject an EntityManager into some other startup code, this should give you a chance to update your database schema before an EntityManager is actually hitting the database.
By the way, for database schema migration I'd recommend Liquibase, which can be triggered by a ServletContextListener.
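A rough sketch of that listener approach (changelog path and JNDI name are examples; Liquibase 3.x API):

@WebListener
public class LiquibaseMigrationListener implements ServletContextListener {

    @Resource(lookup = "java:comp/env/jdbc/datasource") // example JNDI name
    private DataSource dataSource;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        try (Connection connection = dataSource.getConnection()) {
            Liquibase liquibase = new Liquibase(
                    "db/changelog.xml",                  // example changelog location
                    new ClassLoaderResourceAccessor(),
                    new JdbcConnection(connection));
            liquibase.update(new Contexts());
        } catch (Exception e) {
            throw new IllegalStateException("Database migration failed", e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}

Liquibase also ships its own liquibase.integration.servlet.LiquibaseServletListener, which can be registered in web.xml instead of writing such a listener by hand.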