Spring @EnableAsync breaks bean initialization order? - java

I wanted to introduce @Async methods (for sending mails in parallel) in my Spring Boot application.
But when I put the @EnableAsync annotation on our application's main @Configuration class (annotated with @SpringBootApplication), the Flyway DB migrations are executed before the DataSourceInitializer (which runs schema.sql and data.sql for my tests) has run.
The first operation involving a 'should-be-migrated' database table then fails.
Removing @EnableAsync puts everything back to normal. Why does this happen, and how can I fix it (or work around the issue)?
Update Some more findings: @EnableAsync(mode = AdviceMode.ASPECTJ) keeps the original DB setup order, but the @Async method then runs on the caller's thread. I also noticed that the bean 'objectPostProcessor' is created early (as the third bean) when @EnableAsync is not present or @EnableAsync(mode = AdviceMode.ASPECTJ) is used. When plain @EnableAsync is used, this bean is created much later.
Update 2 While I haven't yet been able to create a minimal project that reproduces the problem, I found that the proper DB setup order is restored in my affected application when I comment out @EnableWebSocketMessageBroker in the following:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer
{
    ...
}
Bean 'webSocketConfig' is the first bean created (as per the INFO-level console output) when @EnableWebSocketMessageBroker is present.

It turned out that having both @EnableAsync and @EnableWebSocketMessageBroker present in my application caused the described effect.
Removing either one restored the expected behavior, in which the DataSourceInitializerPostProcessor created the DataSourceInitializer, which triggered the execution of schema.sql and data.sql before the Flyway migrations took place.
When both annotations were present, the BeanPostProcessor named internalAsyncAnnotationProcessor was registered before the DataSourceInitializerPostProcessor.
The cause of the problem was that registering internalAsyncAnnotationProcessor created the dataSource bean as a side effect. This happened because Spring looked for a TaskExecutor bean to use for @Async method execution and unexpectedly picked up the clientInboundChannelExecutor bean, which was present because of @EnableWebSocketMessageBroker. Using this bean caused the instantiation of WebSocketMessagingAutoConfiguration, which created the objectMapper bean (for JSON serialization), which uses services that use DAO repositories, which depend on dataSource. So all those beans got created.
Because DataSourceInitializerPostProcessor wasn't even registered at that time, the DataSourceInitializer was created much later, after the Flyway migration had already taken place.
The javadoc for @EnableAsync says the following:
By default, a SimpleAsyncTaskExecutor will be used to process async method invocations. Besides, annotated methods having a void return type cannot transmit any exception back to the caller. By default, such uncaught exceptions are only logged.
I assumed that a SimpleAsyncTaskExecutor would be created to run the @Async methods, but instead Spring picked up an existing bean with a matching type.
So the solution for this issue was to implement AsyncConfigurer and provide my own Executor. This is also suggested in the javadoc of @EnableAsync:
To customize all this, implement AsyncConfigurer and provide:
* your own Executor through the getAsyncExecutor() method, and
* your own AsyncUncaughtExceptionHandler through the getAsyncUncaughtExceptionHandler() method.
With this tweak the DB setup is again executed as expected.
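For illustration, a minimal sketch of such an AsyncConfigurer (the class name, pool sizes and thread name prefix are placeholders of mine, not taken from the affected application):
import java.util.concurrent.Executor;

import org.springframework.aop.interceptor.AsyncUncaughtExceptionHandler;
import org.springframework.aop.interceptor.SimpleAsyncUncaughtExceptionHandler;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.AsyncConfigurer;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        // Dedicated executor for @Async methods, so Spring no longer goes
        // looking for (and accidentally initializing) other TaskExecutor beans.
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(4);
        executor.setThreadNamePrefix("mail-async-");
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        // Uncaught exceptions thrown from void @Async methods end up here.
        return new SimpleAsyncUncaughtExceptionHandler();
    }
}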

Related

@ConditionalOnBean(KafkaTemplate.class) crashes entire application

I have a Spring Boot application that consumes data from a Kafka topic and sends email notifications with the data received from Kafka:
@Bean
public EmailService emailService() {
    return new EmailServiceImpl(getJavaMailSender());
}
It works perfectly, but after I added @ConditionalOnBean:
@Bean
@ConditionalOnBean(KafkaTemplate.class)
public EmailService emailService() {
    return new EmailServiceImpl(getJavaMailSender());
}
the application failed to start:
required a bean of type 'com.acme.EmailService' that could not be
found.
And I can't find any explanation of how this is possible, because the KafkaTemplate bean is automatically created by Spring in the KafkaAutoConfiguration class.
Could you please give me an explanation?
From the documentation:
The condition can only match the bean definitions that have been
processed by the application context so far and, as such, it is
strongly recommended to use this condition on auto-configuration
classes only. If a candidate bean may be created by another
auto-configuration, make sure that the one using this condition runs
after.
This documentation clearly says what might be wrong here. I understand that KafkaAutoConfiguration creates the KafkaTemplate bean, but it may not have been added to the bean registry yet when the condition was checked. Either define your emailService bean in an auto-configuration ordered after KafkaAutoConfiguration, or otherwise make sure the ordering of the configuration classes guarantees that the KafkaTemplate is in the bean registry before that conditional check runs.
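One way to follow that recommendation, sketched under the assumption that this class is registered as an auto-configuration (via META-INF/spring.factories, or the AutoConfiguration.imports file on newer Boot versions) rather than picked up by component scanning; EmailService and EmailServiceImpl are the classes from the question, and injecting JavaMailSender replaces the original getJavaMailSender() helper:
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.mail.javamail.JavaMailSender;

@Configuration
@AutoConfigureAfter(KafkaAutoConfiguration.class)
public class EmailServiceAutoConfiguration {

    @Bean
    @ConditionalOnBean(KafkaTemplate.class)
    public EmailService emailService(JavaMailSender mailSender) {
        // Evaluated only after KafkaAutoConfiguration has contributed its
        // bean definitions, so the condition can actually see KafkaTemplate.
        return new EmailServiceImpl(mailSender);
    }
}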

Refreshing @Resource and @Autowired bean at Runtime

I have a bean which is defined in the Spring configuration and which gets initialised at startup.
This bean (a Map) is populated by querying the database at startup.
The database gets updated frequently, so I implemented an ApplicationListener and was trying to implement a cache refresh using a TimerTask.
The TimerTask code runs fine, and in it I access the bean via the ApplicationContext, but I am not able to refresh/reinitialize the bean with the new database results.
The @Resource and @Autowired beans still show the old value.
I want to refresh/reinitialize the @Autowired / @Resource bean at runtime. Please advise.
If you are using an ORM, it should handle this for you.
Otherwise, if you are doing it on your own, you can annotate the bean with @RefreshScope and, when you detect a change (by whatever mechanism you are using, such as a cron job or a listener), refresh the context through the actuator:
http://localhost:8080/actuator/refresh
Cheers!
PS: The actuator must be enabled and accessible.
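A minimal sketch of that @RefreshScope approach, assuming Spring Cloud Context (e.g. via a spring-cloud starter) is on the classpath and the refresh endpoint is exposed; LookupCache and LookupDao are hypothetical names standing in for the question's map-holding bean and its database access:
import java.util.Map;

import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

interface LookupDao {
    Map<String, String> loadAll();
}

@Component
@RefreshScope
public class LookupCache {

    private final Map<String, String> values;

    public LookupCache(LookupDao lookupDao) {
        // Runs again when the bean is re-created after POST /actuator/refresh.
        this.values = lookupDao.loadAll();
    }

    public String get(String key) {
        return values.get(key);
    }
}
After the refresh call, beans in the refresh scope are dropped and rebuilt from the database the next time they are accessed.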

Error when calling configprops while Spring Batch job scope is configured

I recently updated my Spring Boot app from 2.1.9 to 2.2.0 and I'm facing a problem. When I call "configprops" on the actuator endpoint, an exception is thrown:
Scope 'job' is not active for the current thread
I reproduced the bug here: https://github.com/guillaumeyan/bugspringbatch (just launch the test). The original project comes from https://github.com/spring-guides/gs-batch-processing/tree/master/complete
I tried to add :
@Bean
public StepScope stepScope() {
    final StepScope stepScope = new StepScope();
    stepScope.setAutoProxy(true);
    return stepScope;
}
but it does not work (with spring.main.allow-bean-definition-overriding=true)
Here is my Spring Batch configuration:
@Bean
@JobScope
public RepositoryItemReader<DossierEntity> dossierToDiagnosticReader(PagingAndSortingRepository<DossierEntity, Long> dossierJpaRepository, @Value("#{jobParameters[origin]}") String origin) {
    RepositoryItemReader<DossierEntity> diagnosticDossierReader = new RepositoryItemReader<>();
    diagnosticDossierReader.setRepository(dossierJpaRepository);
    diagnosticDossierReader.setMethodName("listForBatch");
    // doing some stuff with origin
    return diagnosticDossierReader;
}
ExceptionHandlerExceptionResolver[199] - Resolved [org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'scopedTarget.dossierToDiagnosticReader': Scope 'job' is not active for the current thread;
consider defining a scoped proxy for this bean if you intend to refer to it from a singleton; nested exception is java.lang.IllegalStateException: No context holder available for job scope]
I downloaded your project and was able to reproduce the issue. There are two problems with your example:
You are defining a job-scoped bean in your app, but the JobScope is not registered in your context (and you are not using the @EnableBatchProcessing annotation, which adds it automatically). If you want to use the job scope without @EnableBatchProcessing, you need to register it manually in the context (see the sketch below).
Your test fails because no job is running during the test. Job-scoped beans are lazily instantiated when a job is actually run, and since your test does not start a job, the bean cannot be proxied correctly.
Your test does not seem to test a batch job, so I would exclude the job-scoped bean from the test's context.
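For completeness, a sketch of what registering the scope manually (the first point above) could look like, assuming you really want to avoid @EnableBatchProcessing; with that annotation present this bean is unnecessary:
import org.springframework.batch.core.scope.JobScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BatchScopeConfig {

    // static because JobScope acts as a BeanFactoryPostProcessor and has to be
    // registered before the regular singleton beans are created
    @Bean
    public static JobScope jobScope() {
        JobScope jobScope = new JobScope();
        jobScope.setAutoProxy(true);
        return jobScope;
    }
}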
The bug was resolved in Spring Boot 2.2.1: https://github.com/spring-projects/spring-boot/issues/18714

Spring Boot: Configure Job Scheduler Pool via Annotation

I have a Spring Boot application with a bunch of background jobs. I have added the following annotation to my main application class:
@SpringBootApplication
@EnableScheduling
public class MyApplication {
    // ...
}
In the job class, I have the following configuration:
@Component
public class MyTask {

    @Scheduled(fixedDelay = 14400000)
    public void doSomething() {
        // ...
    }
}
Right now, Spring Boot is executing the jobs sequentially, i.e. one job at a time. This is most likely due to a single-threaded pool.
Is there any annotation or property that can be used to increase the thread pool size?
So far I have found a solution here, but it requires writing a new Configuration class.
Ideally, it would be a property in the application.properties file.
I usually don't put business logic inside a @Scheduled method. Instead, I call a method in another component, and that method has the @Async annotation. When your scheduled job fires, it calls the async method on another thread, and your scheduler is free to run other jobs.
Check more how to do it here: https://docs.spring.io/spring/docs/current/spring-framework-reference/html/scheduling.html#scheduling-annotation-support
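A sketch of that pattern, with made-up class names; it assumes @EnableAsync is enabled somewhere in the configuration so that @Async is honoured:
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
class LongRunningWorker {

    @Async
    public void doSomethingExpensive() {
        // long-running work executes on the async executor's pool,
        // not on the scheduler thread
    }
}

@Component
class MyTaskTrigger {

    private final LongRunningWorker worker;

    MyTaskTrigger(LongRunningWorker worker) {
        this.worker = worker;
    }

    @Scheduled(fixedDelay = 14400000)
    public void trigger() {
        worker.doSomethingExpensive(); // returns immediately
    }
}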
I don't see a property for this in https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html and I don't see any annotation in the docs either.
If you want it configurable at that level, just create your own custom property and inject it into the other solution you found.
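For example (a sketch only; the property name scheduler.pool-size is made up, and the Configuration class mirrors the kind of solution linked above): define your own TaskScheduler bean whose pool size comes from application.properties:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class SchedulerConfig {

    @Bean
    public ThreadPoolTaskScheduler taskScheduler(@Value("${scheduler.pool-size:5}") int poolSize) {
        // Picked up by @EnableScheduling, so @Scheduled jobs can run in parallel
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(poolSize);
        scheduler.setThreadNamePrefix("scheduled-");
        return scheduler;
    }
}
With scheduler.pool-size=5 in application.properties, up to five @Scheduled methods can then run concurrently.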

Spring Data CrudRepository and Transactions

I'm trying to implement transactions on a CrudRepository interface. I'm a beginner with this, and my current problem is that when receiving a lot of requests from different clients, I sometimes get a duplicate record.
To avoid that I wanted to use SQL transactions and their Spring implementation, but I'm unable to get it working.
Here is how I've tried to do it:
@Repository
@EnableTransactionManagement
@Transactional
public interface ApplicationPackageDao extends CrudRepository<ApplicationPackage, Long> {

    /**
     * Find if a record exists for this package name.
     * @param packageName
     * @return
     */
    @Transactional
    ApplicationPackage findByPackageName(String packageName);
}
However, it doesn't seem to work.
I also tried adding the @Transactional annotation higher up, on the Java methods I'm calling, but I can't get that working either.
How am I supposed to work with transactions on a CrudRepository?
Or am I using completely the wrong thing?
In addition to crm86's answer, some more notes on the @Transactional annotation:
It seems to be best practice to annotate the entry points into your application (e.g. your web controller methods or the main method of a scheduled batch). By using the annotation attribute TxType you can enforce constraints/conditions in methods located deeper in your application (e.g. TxType.MANDATORY throws if no transaction context is active, etc.).
The @Transactional annotation only has an effect if the class is loaded as a Spring bean (e.g. via a @Component annotation at class level).
Remember that, by default, only RuntimeExceptions lead to a rollback. If you want a checked exception to trigger a rollback, you have to list it explicitly via the rollbackOn attribute.
The annotation at class level applies to all public methods of the class. Method-level annotations override those at class level. The repeated annotation in your example above (first at class level, then at method level) has no effect.
What I suggest:
Check your context and the configuration classes carrying the @Configuration annotation. From the documentation:
The @EnableTransactionManagement annotation provides equivalent
support if you are using Java based configuration. Simply add the
annotation to a @Configuration class.
@EnableTransactionManagement and <tx:annotation-driven/> only look
for @Transactional on beans in the same application context they are
defined in.
Then you could use @Transactional in your service layer, even on a single method.
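As a hedged illustration of that suggestion (the service and method names are mine; ApplicationPackage, ApplicationPackageDao and findByPackageName are from the question, and the ApplicationPackage constructor is assumed):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ApplicationPackageService {

    private final ApplicationPackageDao applicationPackageDao;

    public ApplicationPackageService(ApplicationPackageDao applicationPackageDao) {
        this.applicationPackageDao = applicationPackageDao;
    }

    @Transactional
    public ApplicationPackage findOrCreate(String packageName) {
        // The check and the insert run in one transaction. Note that a transaction
        // alone does not prevent duplicates under concurrent requests; a unique
        // constraint on the package name at the database level is still advisable.
        ApplicationPackage existing = applicationPackageDao.findByPackageName(packageName);
        if (existing != null) {
            return existing;
        }
        return applicationPackageDao.save(new ApplicationPackage(packageName)); // constructor assumed
    }
}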
Hope it helps
