I recently updated my Spring Boot app from 2.1.9 to 2.2.0 and I'm facing a problem. When I call the "configprops" actuator endpoint, an exception is thrown:
Scope 'job' is not active for the current thread
I reproduced the bug here: https://github.com/guillaumeyan/bugspringbatch (just launch the test). The original project comes from https://github.com/spring-guides/gs-batch-processing/tree/master/complete
I tried to add:
@Bean
public StepScope stepScope() {
    final StepScope stepScope = new StepScope();
    stepScope.setAutoProxy(true);
    return stepScope;
}
but it does not work (with spring.main.allow-bean-definition-overriding=true)
Here is my Spring Batch configuration:
@Bean
@JobScope
public RepositoryItemReader<DossierEntity> dossierToDiagnosticReader(
        PagingAndSortingRepository<DossierEntity, Long> dossierJpaRepository,
        @Value("#{jobParameters[origin]}") String origin) {
    RepositoryItemReader<DossierEntity> diagnosticDossierReader = new RepositoryItemReader<>();
    diagnosticDossierReader.setRepository(dossierJpaRepository);
    diagnosticDossierReader.setMethodName("listForBatch");
    // doing some stuff with origin
    return diagnosticDossierReader;
}
ExceptionHandlerExceptionResolver[199] - Resolved [org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'scopedTarget.dossierToDiagnosticReader': Scope 'job' is not active for the current thread;
consider defining a scoped proxy for this bean if you intend to refer to it from a singleton; nested exception is java.lang.IllegalStateException: No context holder available for job scope]
I downloaded your project and was able to reproduce the case. There are two issues with your example:
You are defining a job-scoped bean in your app, but the JobScope is not defined in your context (and you are not using the @EnableBatchProcessing annotation, which adds it automatically to the context). If you want to use the job scope without @EnableBatchProcessing, you need to add it manually to the context (see the sketch after this list).
Your test fails because there is no job running during your test. Job-scoped beans are lazily instantiated when a job is actually run. Since your test does not start a job, the bean cannot be proxied correctly.
Your test does not seem to test a batch job, so I would exclude the job-scoped bean from the test's context.
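A minimal sketch of registering the job scope manually, assuming you deliberately stay away from @EnableBatchProcessing (the configuration class name is illustrative):

@Configuration
public class BatchScopeConfiguration {

    // JobScope is a BeanFactoryPostProcessor; declaring the method static
    // registers the scope before any regular bean is created.
    @Bean
    public static JobScope jobScope() {
        JobScope jobScope = new JobScope();
        jobScope.setAutoProxy(true);
        return jobScope;
    }
}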
The bug was resolved in Spring Boot 2.2.1: https://github.com/spring-projects/spring-boot/issues/18714
I have a Spring Boot application that consumes data from a Kafka topic and sends email notifications with the data received from Kafka.
@Bean
public EmailService emailService() {
    return new EmailServiceImpl(getJavaMailSender());
}
It works perfectly, but after I added @ConditionalOnBean:
@Bean
@ConditionalOnBean(KafkaTemplate.class)
public EmailService emailService() {
    return new EmailServiceImpl(getJavaMailSender());
}
the application failed to start:
required a bean of type 'com.acme.EmailService' that could not be
found.
And I can't find any explanation for how this is possible, because the KafkaTemplate bean is automatically created by Spring in the KafkaAutoConfiguration class.
Could you please give me an explanation?
From the documentation:
The condition can only match the bean definitions that have been
processed by the application context so far and, as such, it is
strongly recommended to use this condition on auto-configuration
classes only. If a candidate bean may be created by another
auto-configuration, make sure that the one using this condition runs
after.
This documentation says exactly what might be wrong here. KafkaAutoConfiguration creates the KafkaTemplate bean, but it may not have been added to the bean registry yet when the condition was checked. Either move your conditional bean into an auto-configuration class that runs after KafkaAutoConfiguration, or otherwise order your configuration classes so that the KafkaTemplate is guaranteed to be in the bean registry before that conditional check. A sketch of the first approach follows.
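A minimal sketch of that approach, assuming the bean moves into its own auto-configuration class (the class must also be listed under org.springframework.boot.autoconfigure.EnableAutoConfiguration in META-INF/spring.factories; the mail-sender wiring is illustrative):

@Configuration
@AutoConfigureAfter(KafkaAutoConfiguration.class)
public class EmailServiceAutoConfiguration {

    // Evaluated after KafkaAutoConfiguration has contributed its bean
    // definitions, so the KafkaTemplate is visible to the condition.
    @Bean
    @ConditionalOnBean(KafkaTemplate.class)
    public EmailService emailService(JavaMailSender mailSender) {
        return new EmailServiceImpl(mailSender);
    }
}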
We're using Spring Boot 1.5.10 with Spring Data for Apache Cassandra and that's all working well.
We've had a new requirement coming along where we need to connect to a different keyspace while the service is up and running.
Through the use of Spring Cloud Config Server, we can easily set the value of spring.data.cassandra.keyspace-name; however, we're not certain whether there's a way to dynamically switch (force) the service to use this new keyspace without having to restart it first.
Any ideas or suggestions?
Using @RefreshScope with properties/repositories doesn't work, as the keyspace is bound to the Cassandra Session bean.
Using Spring Data Cassandra 1.5 with Spring Boot 1.5 you have at least two options:
Declare a @RefreshScope CassandraSessionFactoryBean; see also CassandraDataAutoConfiguration. This will interrupt all Cassandra operations upon refresh and re-create all dependent beans (see the sketch after the listener example below).
Listen to RefreshScopeRefreshedEvent and change the keyspace via USE my-new-keyspace;. This approach is less invasive and doesn't interrupt running queries. You'd basically use an event listener:
@Component
class MySessionRefresh {

    private final Session session;
    private final Environment environment;

    // constructors omitted for brevity

    @EventListener
    @Order(Ordered.LOWEST_PRECEDENCE)
    public void handle(RefreshScopeRefreshedEvent event) {
        String keyspace = environment.getProperty("spring.data.cassandra.keyspace-name");
        session.execute("USE " + keyspace + ";");
    }
}
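For the first option, a minimal sketch, assuming the Cluster and CassandraConverter beans that CassandraDataAutoConfiguration already provides (on refresh, the session and every bean depending on it are re-created):

@Bean
@RefreshScope
public CassandraSessionFactoryBean session(Cluster cluster, CassandraConverter converter,
        @Value("${spring.data.cassandra.keyspace-name}") String keyspaceName) {
    CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
    session.setCluster(cluster);
    session.setConverter(converter);
    session.setKeyspaceName(keyspaceName);
    return session;
}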
With Spring Data Cassandra 2, we introduced the SessionFactory abstraction providing AbstractRoutingSessionFactory for code-controlled routing of CQL/session calls.
Yes, you can use the @RefreshScope annotation on the bean(s) holding the spring.data.cassandra.keyspace-name value.
After changing the config value through Spring Cloud Config Server, you have to issue a POST on the /refresh endpoint of your application.
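For example, assuming the default endpoint path and port, this is just:

curl -X POST http://localhost:8080/refresh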
From the Spring cloud documentation:
A Spring #Bean that is marked as #RefreshScope will get special treatment when there is a configuration change. This addresses the problem of stateful beans that only get their configuration injected when they are initialized. For instance if a DataSource has open connections when the database URL is changed via the Environment, we probably want the holders of those connections to be able to complete what they are doing. Then the next time someone borrows a connection from the pool he gets one with the new URL.
From the RefreshScope class javadoc:
A Scope implementation that allows for beans to be refreshed dynamically at runtime (see refresh(String) and refreshAll()). If a bean is refreshed then the next time the bean is accessed (i.e. a method is executed) a new instance is created. All lifecycle methods are applied to the bean instances, so any destruction callbacks that were registered in the bean factory are called when it is refreshed, and then the initialization callbacks are invoked as normal when the new instance is created. A new bean instance is created from the original bean definition, so any externalized content (property placeholders or expressions in string literals) is re-evaluated when it is created.
I wanted to introduce @Async methods (for sending mails in parallel) in my Spring Boot application.
But when I put the @EnableAsync annotation on our application's main @Configuration class (annotated with @SpringBootApplication), the Flyway DB migrations are executed before the DataSourceInitializer (which runs schema.sql and data.sql for my tests) has run.
The first operation involving a 'should-be-migrated' database table fails.
Removing the @EnableAsync puts everything back to normal. Why does this happen, and how could I fix this (or work around the issue)?
Update: Some more findings: @EnableAsync(mode = AdviceMode.ASPECTJ) keeps the original DB setup order, but the @Async method then runs on the same thread as the caller. I also saw that the bean 'objectPostProcessor' is created early (as the 3rd bean) when @EnableAsync is not present or @EnableAsync(mode = AdviceMode.ASPECTJ) is used. When plain @EnableAsync is used, this bean is created much later.
Update 2: While I haven't been able to create a minimal project that reproduces the problem yet, I found out that the proper DB setup order is restored in my affected application when I comment out the @EnableWebSocketMessageBroker in the following:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {
    ...
}
Bean 'webSocketConfig' is the first bean created (as per the INFO-level console output) if @EnableWebSocketMessageBroker is present.
It turned out that having both @EnableAsync and @EnableWebSocketMessageBroker present in my application caused the described effect.
Removing either one restored the expected behavior, in which the DataSourceInitializerPostProcessor created the DataSourceInitializer, which triggered execution of schema.sql and data.sql before the Flyway migrations took place.
When both annotations were present, the registration of the BeanPostProcessor named internalAsyncAnnotationProcessor happened before the DataSourceInitializerPostProcessor was registered.
The cause of the problem was that the registration of internalAsyncAnnotationProcessor caused the creation of the dataSource bean as a side effect. This happened because Spring was looking for a TaskExecutor bean to use for @Async method execution. Spring unexpectedly picked up the clientInboundChannelExecutor bean, which was present because of @EnableWebSocketMessageBroker. Using this bean caused the instantiation of WebSocketMessagingAutoConfiguration, which created the objectMapper bean (for JSON serialization), which uses services that use DAO repositories, which depend on dataSource. So all those beans got created.
Because DataSourceInitializerPostProcessor wasn't even registered at that time, DataSourceInitializer was created much later, after the Flyway migration took place.
The javadoc for @EnableAsync says the following:
By default, a SimpleAsyncTaskExecutor will be used to process async method invocations. Besides, annotated methods having a void return type cannot transmit any exception back to the caller. By default, such uncaught exceptions are only logged.
I assumed that a SimpleAsyncTaskExecutor would be created to run the @Async methods, but instead Spring picked up an existing bean with a matching type.
So the solution for this issue was to implement AsyncConfigurer and provide my own Executor. This is also suggested in the javadoc of @EnableAsync:
To customize all this, implement AsyncConfigurer and provide:
* your own Executor through the getAsyncExecutor() method, and
* your own AsyncUncaughtExceptionHandler through the getAsyncUncaughtExceptionHandler() method.
With this tweak the DB setup is again executed as expected.
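For reference, a minimal sketch of that solution (pool sizing and thread-name prefix are illustrative):

@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        // A dedicated executor, so Spring no longer searches for an existing
        // TaskExecutor bean (and no longer triggers the side effects above).
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(4);
        executor.setThreadNamePrefix("async-mail-");
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new SimpleAsyncUncaughtExceptionHandler();
    }
}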
I have a bean (SettingService) which is annotated with @Transactional and injected into another bean, where it is invoked on the context refreshed event.
public class DefaultConfigManager
        implements ApplicationListener<ContextRefreshedEvent>, ConfigManager {

    @Autowired
    private SettingService service;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        System.out.println("Proxy: " + AopUtils.isJdkDynamicProxy(service));
        String key = service.getSystemSetting("KEY");
    }
}
Transactions generally work well, and the method above works as expected in Spring 4.1.9, where the println indicates that the SettingService bean is a JDK dynamic proxy (for transaction handling).
Now, after upgrading to Spring 4.2.5, this suddenly starts throwing the following error:
org.hibernate.HibernateException: Could not obtain transaction-synchronized Session for current thread
at org.springframework.orm.hibernate4.SpringSessionContext.currentSession(SpringSessionContext.java:134)
at org.hibernate.internal.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:993)
and the println indicates that the SettingService is no longer a proxy / has not been decorated, meaning that no transactions will be initiated.
According to the Spring documentation, by the time the ContextRefreshedEvent is published, all beans should be loaded and all post-processors should be finished.
A Hibernate transaction manager is configured in the app context, the tx:annotation-driven element is in place, the @Transactional annotation is put on the implementation (not the interface), and there are no circular dependencies in the system. Hibernate version: 4.2.20.Final.
Does this ring a bell with anyone? Are there any typical reasons why beans would no longer be proxied with transactions on ContextRefreshedEvent in Spring 4.2? Any common mistakes, or changes between Spring 4.1 and 4.2, to be aware of?
I got the same problem as you. My workaround is to put the @Transactional annotation at the class level rather than the method level.
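A minimal sketch of that workaround (the service internals are illustrative):

@Transactional
public class SettingService {

    // With the annotation at class level, every public method is
    // transactional and the bean is proxied as a whole.
    public String getSystemSetting(String key) {
        ...
    }
}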
During the first execution of the first job in the Quartz scheduler, the following exception is thrown by Spring. Note that the job makes an explicit call to applicationContext.getBean(...) in its execution.
Can someone explain the cause of this exception and, maybe, the way to avoid it?
Spring version : 4.1.5.RELEASE
Quartz version : 2.1.6
Thanks in advance
2015-07-24 09:20:27,416 ERROR be.citobi.mediquality.schedulers.A4MCubeJob - a4MCubeJob in error
java.lang.IllegalStateException: About-to-be-created singleton instance implicitly appeared through the creation of the factory bean that its bean definition points to
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:374)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1111)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1006)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getSingletonFactoryBeanForTypeCheck(AbstractAutowireCapableBeanFactory.java:860)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryBean(AbstractAutowireCapableBeanFactory.java:790)
at org.springframework.beans.factory.support.AbstractBeanFactory.isTypeMatch(AbstractBeanFactory.java:542)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doGetBeanNamesForType(DefaultListableBeanFactory.java:436)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:412)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:398)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:337)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:331)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:968)
at be.citobi.mediquality.schedulers.A4MCubeJob.execute(A4MCubeJob.java:26)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
I'm not sure if my solution is correct, but here is how I autowire Spring beans into my jobs. SchedulerFactoryBean creation:
// Note: MySpringBean will be automatically autowired here.
// Another possible approach is to use @Autowired and inject the necessary bean one level higher.
@Bean
public SchedulerFactoryBean scheduler(MySpringBean mySpringBean) {
    SchedulerFactoryBean sfb = new SchedulerFactoryBean();
    // default configuration goes here
    Map<String, Object> schedulerContext = new HashMap<>();
    schedulerContext.put("mySpringBean", mySpringBean);
    sfb.setSchedulerContextAsMap(schedulerContext);
    return sfb;
}
And here is my job code, which is supposed to use this Spring bean:
public class MyJob extends QuartzJobBean {

    private MySpringBean mySpringBean;

    // QuartzJobBean applies scheduler context entries as bean properties,
    // so this setter receives the bean registered above.
    public void setMySpringBean(MySpringBean mySpringBean) {
        this.mySpringBean = mySpringBean;
    }

    @Override
    public void executeInternal(JobExecutionContext jobContext) throws JobExecutionException {
        // job logic using mySpringBean
    }
}
In my application the Quartz jobs didn't require the whole application context, just 3 beans, so I decided to autowire specific beans instead of autowiring the whole context.
Hope this helps.
I found out what the problem was: in the Spring config, several beans implementing FactoryBean were declared without using generics. Adding the generics solved the issue.
This was most likely not related to Quartz.
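For illustration, a minimal sketch of that fix (MyServiceFactoryBean and MyService are hypothetical names): declaring the produced type through the generic parameter and getObjectType() lets Spring answer type checks from the bean definition without eagerly instantiating the factory bean.

public class MyServiceFactoryBean implements FactoryBean<MyService> {

    @Override
    public MyService getObject() throws Exception {
        return new MyService();
    }

    // The generic type plus this method let the container determine the
    // produced type without creating the bean early.
    @Override
    public Class<?> getObjectType() {
        return MyService.class;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}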