Dynamic datasource routing - DataSource router not initialized - java

I'm referring to this article, in which the AbstractRoutingDataSource from the Spring Framework is used to dynamically change the data source used by the application. I'm using MyBatis (3.3.0) with Spring (4.1.6.RELEASE). I want to switch to the backup database if an exception occurs while getting data from the main DB. In this example, I have used an HSQL and a MySQL database.
RoutingDataSource:
public class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.getTargetDataSource();
    }
}
DataSourceContextHolder:
public class DataSourceContextHolder {
    private static final ThreadLocal<DataSourceEnum> contextHolder = new ThreadLocal<DataSourceEnum>();

    public static void setTargetDataSource(DataSourceEnum targetDataSource) {
        contextHolder.set(targetDataSource);
    }

    public static DataSourceEnum getTargetDataSource() {
        return contextHolder.get();
    }

    public static void resetDefaultDataSource() {
        contextHolder.remove();
    }
}
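The DataSourceEnum referenced above is not shown in the question; a minimal sketch consistent with how it is used would be:
// Assumed shape of the enum used as the routing lookup key (not shown in the question).
public enum DataSourceEnum {
    HSQL,   // primary embedded HSQL database
    BACKUP  // backup MySQL database
}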
ApplicationDataConfig:
@Configuration
@MapperScan(basePackages = "com.sample.mapper")
@ComponentScan("com.sample.config")
@PropertySource(value = {"classpath:app.properties"},
        ignoreResourceNotFound = true)
public class ApplicationDataConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
        PropertySourcesPlaceholderConfigurer configurer =
                new PropertySourcesPlaceholderConfigurer();
        return configurer;
    }

    @Bean
    public SqlSessionFactoryBean sqlSessionFactoryBean() throws Exception {
        SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        RoutingDataSource routingDataSource = new RoutingDataSource();
        routingDataSource.setDefaultTargetDataSource(dataSource1());
        Map<Object, Object> targetDataSource = new HashMap<Object, Object>();
        targetDataSource.put(DataSourceEnum.HSQL, dataSource1());
        targetDataSource.put(DataSourceEnum.BACKUP, dataSource2());
        routingDataSource.setTargetDataSources(targetDataSource);
        sessionFactory.setDataSource(routingDataSource);
        sessionFactory.setTypeAliasesPackage("com.sample.common.domain");
        sessionFactory.setMapperLocations(
                new PathMatchingResourcePatternResolver()
                        .getResources("classpath*:com/sample/mapper/**/*.xml"));
        return sessionFactory;
    }

    @Bean
    public DataSource dataSource1() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.HSQL).addScript(
                "classpath:database/app-hsqldb-schema.sql").addScript(
                "classpath:database/app-hsqldb-datascript.sql").build();
    }

    @Bean
    public DataSource dataSource2() {
        PooledDataSourceFactory pooledDataSourceFactory = new PooledDataSourceFactory();
        pooledDataSourceFactory.setProperties(jdbcProperties());
        return pooledDataSourceFactory.getDataSource();
    }

    @Bean
    protected Properties jdbcProperties() {
        // Get the data from the properties file
        Properties jdbcProperties = new Properties();
        jdbcProperties.setProperty("url", datasourceUrl);
        jdbcProperties.setProperty("driver", datasourceDriver);
        jdbcProperties.setProperty("username", datasourceUsername);
        jdbcProperties.setProperty("password", datasourcePassword);
        jdbcProperties.setProperty("poolMaximumIdleConnections", maxConnectionPoolSize);
        jdbcProperties.setProperty("poolMaximumActiveConnections", minConnectionPoolSize);
        return jdbcProperties;
    }
}
Client:
@Autowired
private ApplicationMapper appMapper;

public MyObject getObjectById(String Id) {
    MyObject myObj = null;
    try {
        DataSourceContextHolder.setTargetDataSource(DataSourceEnum.HSQL);
        myObj = appMapper.getObjectById(Id);
    } catch (Throwable e) {
        DataSourceContextHolder.setTargetDataSource(DataSourceEnum.BACKUP);
        myObj = appMapper.getObjectById(Id);
    } finally {
        DataSourceContextHolder.resetDefaultDataSource();
    }
    return getObjectDetails(myObj);
}
I'm getting the following exception:
### Error querying database. Cause: java.lang.IllegalArgumentException: DataSource router not initialized
However, I'm able to get things working if I use only one DB at a time, so there doesn't seem to be an issue with the data source configuration itself.

Try adding this last line (keeping the same order):
targetDataSource.put(DataSourceEnum.HSQL, dataSource1());
targetDataSource.put(DataSourceEnum.BACKUP, dataSource2());
routingDataSource.setTargetDataSources(targetDataSource);
routingDataSource.afterPropertiesSet();
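The reason this helps: AbstractRoutingDataSource.determineTargetDataSource() asserts that its resolved target map is non-null with exactly the message "DataSource router not initialized", and that map is only built in afterPropertiesSet(). In the question, the RoutingDataSource is created with new inside sqlSessionFactoryBean() and never registered as a bean, so Spring never calls afterPropertiesSet() on it. Besides calling it manually as above, a sketch of an alternative (reusing the dataSource1()/dataSource2() beans from the question) is to expose the router as its own bean so the container initializes it:
@Bean
public DataSource routingDataSource() {
    // Declared as a @Bean, so Spring invokes afterPropertiesSet() automatically,
    // which builds the resolved map used by determineTargetDataSource().
    RoutingDataSource routingDataSource = new RoutingDataSource();
    Map<Object, Object> targetDataSources = new HashMap<Object, Object>();
    targetDataSources.put(DataSourceEnum.HSQL, dataSource1());
    targetDataSources.put(DataSourceEnum.BACKUP, dataSource2());
    routingDataSource.setTargetDataSources(targetDataSources);
    routingDataSource.setDefaultTargetDataSource(dataSource1());
    return routingDataSource;
}

@Bean
public SqlSessionFactoryBean sqlSessionFactoryBean() throws Exception {
    SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
    sessionFactory.setDataSource(routingDataSource()); // use the managed router bean
    // ... type aliases and mapper locations as in the question ...
    return sessionFactory;
}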

I got the same issue and found a solution using the SchemaExport class of Hibernate.
For each DataSourceEnum you can manually initialize the datasource.
Here is my detailed answer to my own issue description.

Related

jOOQ with Spring Boot not inserting entity

I faced an issue with jOOQ not inserting entities unless the repository is annotated with @Transactional.
Here's my configuration:
@Configuration
@EnableTransactionManagement
@RequiredArgsConstructor
public class PersistenceConfig {

    @Value("${spring.datasource.url}")
    private String dbUrl;

    @Value("${spring.datasource.username}")
    private String dbUser;

    @Value("${spring.datasource.password}")
    private String dbPassword;

    @Bean
    @SneakyThrows
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        // https://mariadb.com/kb/en/about-mariadb-connector-j/
        config.setDriverClassName(DatabaseDriver.MARIADB.getDriverClassName());
        config.setJdbcUrl(dbUrl);
        config.setUsername(dbUser);
        config.setPassword(dbPassword);
        config.setAutoCommit(false);
        // https://github.com/brettwooldridge/HikariCP/wiki/MySQL-Configuration
        config.addDataSourceProperty("cacheServerConfiguration", true);
        config.addDataSourceProperty("useServerPrepStmts", true);
        config.addDataSourceProperty("useLocalSessionState", true);
        config.addDataSourceProperty("cacheResultSetMetadata", true);
        config.addDataSourceProperty("rewriteBatchedStatements", true);
        config.addDataSourceProperty("elideSetAutoCommits", true);
        config.addDataSourceProperty("maintainTimeStats", false);
        config.addDataSourceProperty("cachePrepStmts", true);
        config.addDataSourceProperty("prepStmtCacheSize", 350);
        config.addDataSourceProperty("prepStmtCacheSqlLimit", 2048);
        return new HikariDataSource(config);
    }

    @Bean
    public TransactionAwareDataSourceProxy transactionAwareDataSource() {
        return new TransactionAwareDataSourceProxy(dataSource());
    }

    @Bean
    public DataSourceTransactionManager transactionManager() {
        return new DataSourceTransactionManager(dataSource());
    }

    @Bean
    public DataSourceConnectionProvider connectionProvider() {
        return new DataSourceConnectionProvider(transactionAwareDataSource());
    }

    @Bean
    public ExceptionTranslator exceptionTransformer() {
        return new ExceptionTranslator();
    }

    @Bean
    public SpringTransactionProvider springTransactionProvider() {
        return new SpringTransactionProvider(transactionManager());
    }

    @Bean
    public DefaultConfiguration configuration() {
        DefaultConfiguration jooqConfiguration = new DefaultConfiguration();
        jooqConfiguration.set(connectionProvider());
        jooqConfiguration.set(new DefaultExecuteListenerProvider(exceptionTransformer()));
        jooqConfiguration.set(SQLDialect.MARIADB);
        jooqConfiguration.set(springTransactionProvider());
        return jooqConfiguration;
    }

    @Bean
    public DefaultDSLContext dsl() {
        return new DefaultDSLContext(configuration());
    }

    @Bean
    public TransactionTemplate transactionTemplate() {
        return new TransactionTemplate(transactionManager());
    }

    private static class ExceptionTranslator extends DefaultExecuteListener {
        public void exception(ExecuteContext context) {
            SQLDialect dialect = context.configuration().dialect();
            SQLExceptionTranslator translator
                    = new SQLErrorCodeSQLExceptionTranslator(dialect.name());
            context.exception(translator
                    .translate("Access database using jOOQ", context.sql(), context.sqlException()));
        }
    }
}
Repository:
@Repository
public class UserRepository extends UserDao {

    private final DSLContext dslContext;

    public UserRepository(DSLContext dslContext) {
        super(dslContext.configuration());
        this.dslContext = dslContext;
    }
}
So, calling userRepository.insert(...) doesn't actually insert into the database, although the logs show the following:
org.jooq.tools.LoggerListener : Executing query : insert into `user` (...)
org.jooq.tools.LoggerListener : -> with bind values : insert into `user` ...
However, if I overload UserDao's insert method and annotate it with @Transactional, it works: the rows actually get inserted. I suppose I have misconfigured something.
Spring Boot with the jOOQ starter is used.
The problem is actually with setAutoCommit(false): with auto-commit disabled and no Spring-managed transaction around the call, the statement is executed but never committed, so it is discarded when the connection returns to the pool.
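A minimal sketch of one way to fix it, assuming a hypothetical UserService that owns the write (the User type stands in for whatever POJO UserDao handles):
// Hypothetical service: the @Transactional proxy opens a transaction through the
// DataSourceTransactionManager, so the insert executed with auto-commit off is committed.
@Service
public class UserService {

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @Transactional
    public void createUser(User user) {
        userRepository.insert(user);
    }
}
Alternatively, remove the config.setAutoCommit(false) line so each statement commits on its own, or wrap the call programmatically with the transactionTemplate bean already defined above.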

JDBC Spring Data @Transactional not working

I'm using Spring Boot and spring-data-jdbc.
I wrote this repository class:
@Repository
@Transactional(rollbackFor = Exception.class)
public class RecordRepository {

    public RecordRepository() {}

    public void insert(Record record) throws Exception {
        JDBCConfig jdbcConfig = new JDBCConfig();
        SimpleJdbcInsert messageInsert = new SimpleJdbcInsert(jdbcConfig.postgresDataSource());
        messageInsert.withTableName(record.tableName()).execute(record.content());
        throw new Exception();
    }
}
Then I wrote a client class that invokes the insert method:
@EnableJdbcRepositories()
@Configuration
public class RecordClient {

    @Autowired
    private RecordRepository repository;

    public void insert(Record r) throws Exception {
        repository.insert(r);
    }
}
I would expect no record to be inserted into the DB when RecordClient's insert() method is invoked, because RecordRepository's insert() throws an Exception. However, the record is added anyway.
What am I missing?
EDIT: This is the class where I configure my DataSource:
@Configuration
@EnableTransactionManagement
public class JDBCConfig {

    @Bean
    public DataSource postgresDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.postgresql.Driver");
        dataSource.setUrl("jdbc:postgresql://localhost:5432/db");
        dataSource.setUsername("postgres");
        dataSource.setPassword("root");
        return dataSource;
    }
}
You have to inject your DataSource instead of creating it manually, because @Transactional only works for Spring-managed beans. If you create a DataSource instance by calling the constructor yourself (like new JDBCConfig().postgresDataSource()), you are creating it manually and it is not a Spring-managed bean.
@Repository
@Transactional(rollbackFor = Exception.class)
public class RecordRepository {

    @Autowired
    DataSource dataSource;

    public RecordRepository() {}

    public void insert(Record record) throws Exception {
        SimpleJdbcInsert messageInsert = new SimpleJdbcInsert(dataSource);
        messageInsert.withTableName(record.tableName()).execute(record.content());
        throw new Exception();
    }
}
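For completeness, a short usage sketch (the Record construction is hypothetical) showing what to expect once the DataSource is injected; this assumes a transaction manager is present, which Spring Boot auto-configures for a single DataSource bean:
// recordClient and recordRepository are Spring-managed beans, so the call below goes
// through the @Transactional proxy; the thrown Exception marks the transaction for
// rollback and the row is not persisted.
try {
    recordClient.insert(record); // record assumed to be built elsewhere
} catch (Exception expected) {
    // expected: the insert above has been rolled back
}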

TaskConfigurer does not consider schema name for datasource

I have 2 data sources. For one data source I want to use a custom schema name, so I am setting my data source URL like this:
spring.datasource.url=jdbc:postgresql://192.168.33.10/analytics?currentSchema=bahmni_mart_scdf
But it creates all the tables in the public schema instead of the bahmni_mart_scdf schema.
I already override the TaskRepositoryInitializer bean and implement TaskConfigurer.
Here are my implementations:
DatabaseConfiguration.class
@Configuration
public class DatabaseConfiguration {

    @Bean(name = "martDb")
    @ConfigurationProperties(prefix = "spring.ds_mart")
    public DataSource martDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "martJdbcTemplate")
    public JdbcTemplate martJdbcTemplate(@Qualifier("martDb") DataSource dsMart) {
        return new JdbcTemplate(dsMart);
    }

    @Bean(name = "scdfDb")
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "scdfJdbcTemplate")
    public JdbcTemplate scdfJdbcTemplate(@Qualifier("scdfDb") DataSource scdfDb) {
        return new JdbcTemplate(scdfDb);
    }
}
DataloadTaskConfigurer.class
@Component
public class DataloadTaskConfigurer implements TaskConfigurer {

    private DataSource dataSource;
    private TaskRepository taskRepository;
    private TaskExplorer taskExplorer;
    private PlatformTransactionManager transactionManager;

    @Autowired
    public DataloadTaskConfigurer(@Qualifier("scdfJdbcTemplate") JdbcTemplate jdbcTemplate) {
        this.dataSource = jdbcTemplate.getDataSource();
        TaskExecutionDaoFactoryBean taskExecutionDaoFactoryBean = new TaskExecutionDaoFactoryBean(dataSource);
        this.taskRepository = new SimpleTaskRepository(taskExecutionDaoFactoryBean);
        this.taskExplorer = new SimpleTaskExplorer(taskExecutionDaoFactoryBean);
    }

    @Override
    public TaskRepository getTaskRepository() {
        return this.taskRepository;
    }

    @Override
    public PlatformTransactionManager getTransactionManager() {
        if (this.transactionManager == null) {
            this.transactionManager = new DataSourceTransactionManager(this.dataSource);
        }
        return this.transactionManager;
    }

    @Override
    public TaskExplorer getTaskExplorer() {
        return this.taskExplorer;
    }

    public DataSource getTaskDataSource() {
        return this.dataSource;
    }
}
TaskConfiguration.class
@Configuration
public class TaskConfiguration {

    @Bean
    public TaskRepositoryInitializer taskRepositoryInitializerInDataMart(@Qualifier("scdfJdbcTemplate") JdbcTemplate jdbcTemplate) throws Exception {
        TaskRepositoryInitializer taskRepositoryInitializer = new TaskRepositoryInitializer();
        taskRepositoryInitializer.setDataSource(jdbcTemplate.getDataSource());
        taskRepositoryInitializer.afterPropertiesSet();
        return taskRepositoryInitializer;
    }
}
application.properties
spring.datasource.url=jdbc:postgresql://192.168.33.10/analytics?currentSchema=bahmni_mart_scdf
spring.datasource.username=analytics
spring.datasource.password=""
spring.datasource.driver-class-name=org.postgresql.Driver
spring.ds_mart.url=jdbc:postgresql://192.168.33.10/analytics
spring.ds_mart.username=analytics
spring.ds_mart.password=""
spring.ds_mart.driver-class-name=org.postgresql.Driver
You'll need to not override the TaskRepositoryInitializer, but turn it off altogether via spring.cloud.task.initialize.enable=false. From there, using Spring Boot, you'll want to pass in the schema for each datasource:
spring.datasource.schema=schema1.sql
spring.ds_mart.schema=schema2.sql
The problem was with the PostgreSQL driver version. Once I updated to the latest version (42.2.2), everything worked fine as expected.

How to create a table with Spring Data Cassandra?

I have created my own repository like this:
public interface MyRepository extends TypedIdCassandraRepository<MyEntity, String> {
}
So the question is: how do I automatically create the Cassandra table for that? Currently Spring injects MyRepository, which tries to insert the entity into a non-existent table.
So is there a way to create Cassandra tables (if they do not exist) during Spring container startup?
P.S. It would be very nice if there were just a boolean config property, without adding lines of XML and creating something like a BeanFactory, etc. :-)
Override the getSchemaAction property on the AbstractCassandraConfiguration class:
@Configuration
@EnableCassandraRepositories(basePackages = "com.example")
public class TestConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "test_config";
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.RECREATE_DROP_UNUSED;
    }

    @Bean
    public CassandraOperations cassandraOperations() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}
You can use this config in application.properties:
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
You'll also need to override the getEntityBasePackages() method in your AbstractCassandraConfiguration implementation. This will allow Spring to find any classes that you've annotated with @Table, and create the tables.
@Override
public String[] getEntityBasePackages() {
    return new String[]{"com.example"};
}
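For reference, a minimal (hypothetical) entity that such a scan would pick up and turn into a table:
// Hypothetical entity under com.example; with a schema action such as
// CREATE_IF_NOT_EXISTS (or RECREATE_DROP_UNUSED above) the matching table is
// created at startup.
@Table
public class Person {

    @PrimaryKey
    private String id;

    @Column
    private String name;

    // getters and setters omitted for brevity
}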
You'll need to include the spring-data-cassandra dependency in your pom.xml file.
Configure your TestConfig class as below:
@Configuration
@PropertySource(value = { "classpath:Your .properties file here" })
@EnableCassandraRepositories(basePackages = { "base-package name of your Repositories" })
public class CassandraConfig {

    @Autowired
    private Environment env;

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(env.getProperty("contactpoints from your properties file"));
        cluster.setPort(Integer.parseInt(env.getProperty("ports from your properties file")));
        return cluster;
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(env.getProperty("keyspace from your properties file"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.CREATE_IF_NOT_EXISTS);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }

    @Bean
    public CassandraMappingContext mappingContext() throws ClassNotFoundException {
        CassandraMappingContext mappingContext = new CassandraMappingContext();
        mappingContext.setInitialEntitySet(getInitialEntitySet());
        return mappingContext;
    }

    public String[] getEntityBasePackages() {
        return new String[]{"base-package name of all your entities annotated with @Table"};
    }

    protected Set<Class<?>> getInitialEntitySet() throws ClassNotFoundException {
        return CassandraEntityClassScanner.scan(getEntityBasePackages());
    }
}
This last getInitialEntitySet method might be optional; try without it too.
Make sure your keyspace, contact points and port are in the .properties file, like:
cassandra.contactpoints=localhost,127.0.0.1
cassandra.port=9042
cassandra.keyspace='Your Keyspace name here'
Actually, after digging into the source code of spring-data-cassandra:3.1.9, you can check the implementation of
org.springframework.data.cassandra.config.SessionFactoryFactoryBean#performSchemaAction
which is implemented as follows:
protected void performSchemaAction() throws Exception {

    boolean create = false;
    boolean drop = DEFAULT_DROP_TABLES;
    boolean dropUnused = DEFAULT_DROP_UNUSED_TABLES;
    boolean ifNotExists = DEFAULT_CREATE_IF_NOT_EXISTS;

    switch (this.schemaAction) {
        case RECREATE_DROP_UNUSED:
            dropUnused = true;
        case RECREATE:
            drop = true;
        case CREATE_IF_NOT_EXISTS:
            ifNotExists = SchemaAction.CREATE_IF_NOT_EXISTS.equals(this.schemaAction);
        case CREATE:
            create = true;
        case NONE:
        default:
            // do nothing
    }

    if (create) {
        createTables(drop, dropUnused, ifNotExists);
    }
}
which means you have to assign CREATE to schemaAction if the table has never been created, and CREATE_IF_NOT_EXISTS does not work.
For more information, please check here: Why `spring-data-jpa` with `spring-data-cassandra` won't create cassandra tables automatically?
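Following that reading, a sketch of forcing CREATE in an AbstractCassandraConfiguration subclass (per this answer's claim; note that the earlier answers report CREATE_IF_NOT_EXISTS working for them):
@Override
public SchemaAction getSchemaAction() {
    // per this answer: use CREATE when the tables have never been created before
    return SchemaAction.CREATE;
}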

org.hibernate.HibernateException: createQuery is not valid without active transaction @Scheduled

I am using a scheduled task to update my database like this:
public interface UserRatingManager {
    public void updateAllUsers();
}

@Service
public class DefaultUserRatingManager implements UserRatingManager {

    @Autowired
    UserRatingDAO userRatingDAO;

    @Override
    @Transactional("txName")
    public void updateAllUsers() {
        List<String> userIds = userRatingDAO.getAllUserIds();
        for (String userId : userIds) {
            updateUserRating(userId);
        }
    }
}

public interface UserRatingDAO extends GenericDAO<UserRating, String> {
    public void deleteAll();
    public List<String> getAllUserIds();
}

@Repository
public class HibernateUserRatingDAO extends BaseDAO<UserRating, String> implements UserRatingDAO {

    @Override
    public List<String> getAllUserIds() {
        List<String> result = new ArrayList<String>();
        Query q1 = getSession().createQuery("Select userId from UserRating");
    }
}
I configured the persistence like this:
@Configuration
@ComponentScan({ "com.estartup" })
@PropertySource("classpath:jdbc.properties")
@EnableTransactionManagement
@EnableScheduling
public class PersistenceConfig {

    @Autowired
    Environment env;

    @Scheduled(fixedRate = 5000)
    public void run() {
        userRatingManager().updateAllUsers();
    }

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource driverManagerDataSource = new DriverManagerDataSource(env.getProperty("connection.url"), env.getProperty("connection.username"), env.getProperty("connection.password"));
        driverManagerDataSource.setDriverClassName("com.mysql.jdbc.Driver");
        return driverManagerDataSource;
    }

    public PersistenceConfig() {
        super();
    }

    @Bean
    public UserRatingUpdate userRatingUpdate() {
        return new UserRatingUpdate();
    }

    @Bean
    public UserRatingManager userRatingManager() {
        return new DefaultUserRatingManager();
    }

    @Bean
    public LocalSessionFactoryBean runnableSessionFactory() {
        LocalSessionFactoryBean factoryBean = null;
        try {
            factoryBean = createBaseSessionFactory();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return factoryBean;
    }

    private LocalSessionFactoryBean createBaseSessionFactory() throws IOException {
        LocalSessionFactoryBean factoryBean;
        factoryBean = new LocalSessionFactoryBean();
        Properties pp = new Properties();
        pp.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
        pp.setProperty("hibernate.max_fetch_depth", "3");
        pp.setProperty("hibernate.show_sql", "false");
        factoryBean.setDataSource(dataSource());
        factoryBean.setPackagesToScan(new String[] { "com.estartup.*" });
        factoryBean.setHibernateProperties(pp);
        factoryBean.afterPropertiesSet();
        return factoryBean;
    }

    @Bean(name = "txName")
    public HibernateTransactionManager runnableTransactionManager() {
        HibernateTransactionManager htm = new HibernateTransactionManager(runnableSessionFactory().getObject());
        return htm;
    }
}
However, when I get to:
Query q1 = getSession().createQuery("Select userId from UserRating");
in the above HibernateUserRatingDAO I get an exception:
org.hibernate.HibernateException: createQuery is not valid without active transaction
at org.hibernate.context.internal.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:352)
at com.sun.proxy.$Proxy63.createQuery(Unknown Source)
at com.estartup.dao.impl.HibernateUserRatingDAO.getAllUserIds(HibernateUserRatingDAO.java:36)
How can I configure my scheduled tasks to run inside transactions?
EDITED:
Here is the code for BaseDAO
@Repository
public class BaseDAO<T, ID extends Serializable> extends GenericDAOImpl<T, ID> {

    private static final Logger logger = LoggerFactory.getLogger(BaseDAO.class);

    @Autowired
    @Override
    public void setSessionFactory(SessionFactory sessionFactory) {
        super.setSessionFactory(sessionFactory);
    }

    public void setTopAndForUpdate(int top, Query query) {
        query.setLockOptions(LockOptions.UPGRADE);
        query.setFirstResult(0);
        query.setMaxResults(top);
    }
}
EDITED:
Enabling Spring transaction logging prints the following:
DEBUG [pool-1-thread-1] org.springframework.transaction.annotation.AnnotationTransactionAttributeSource - Adding transactional method 'updateAllUsers' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; 'txName'
What is happening in this case is that since you are using userRatingManager() inside the configuration (where the actual scheduled method exists), the proxy that Spring creates to handle the transaction management for UserRatingUpdate is not being used.
I propose you do the following:
public interface WhateverService {
    void executeScheduled();
}

@Service
public class WhateverServiceImpl implements WhateverService {

    private final UserRatingManager userRatingManager;

    @Autowired
    public WhateverServiceImpl(UserRatingManager userRatingManager) {
        this.userRatingManager = userRatingManager;
    }

    @Scheduled(fixedRate = 5000)
    public void executeScheduled() {
        userRatingManager.updateAllUsers();
    }
}
Also change your transaction manager configuration code to:
#Bean(name = "txName")
#Autowired
public HibernateTransactionManager runnableTransactionManager(SessionFactory sessionFactory) {
HibernateTransactionManager htm = new HibernateTransactionManager();
htm.setSessionFactory(sessionFactory);
return htm;
}
and remove factoryBean.afterPropertiesSet(); from createBaseSessionFactory
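Putting that last suggestion together, the factory method would then look roughly like this (same properties as before, just without the manual afterPropertiesSet() call, which Spring performs itself for a LocalSessionFactoryBean returned from a @Bean method):
private LocalSessionFactoryBean createBaseSessionFactory() {
    LocalSessionFactoryBean factoryBean = new LocalSessionFactoryBean();
    Properties pp = new Properties();
    pp.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
    pp.setProperty("hibernate.max_fetch_depth", "3");
    pp.setProperty("hibernate.show_sql", "false");
    factoryBean.setDataSource(dataSource());
    factoryBean.setPackagesToScan(new String[] { "com.estartup.*" });
    factoryBean.setHibernateProperties(pp);
    // no factoryBean.afterPropertiesSet() here: Spring calls it once the bean is created
    return factoryBean;
}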
As I already mentioned, I used your code and created a small sample that works for me. Judging by the classes used, I assumed you are using the Hibernate Generic DAO Framework. It's a standalone sample; the main() class is Main. Running it, you can see the transaction-related DEBUG messages in the logs that show when a transaction is initiated and committed. You can compare my settings and jar versions with what you have and see if anything stands out.
Also, as I already suggested, you might want to look in the logs to see whether proper transactional behavior is being used and compare that with the logs my sample creates.
I tried to replicate your problem, so I integrated it into my Hibernate examples on GitHub.
You can run my CompanySchedulerTest and see that it works. This is what I did to run it:
I made sure the application context is aware of our scheduler:
<task:annotation-driven/>
The scheduler is defined in its own bean:
@Service
public class CompanyScheduler implements DisposableBean {

    private static final Logger LOG = LoggerFactory.getLogger(CompanyScheduler.class);

    @Autowired
    private CompanyManager companyManager;

    private volatile boolean enabled = true;

    @Override
    public void destroy() throws Exception {
        enabled = false;
    }

    @Scheduled(fixedRate = 100)
    public void run() {
        if (enabled) {
            LOG.info("Run scheduler");
            companyManager.updateAllUsers();
        }
    }
}
My JPA/Hibernate configs are in applicationContext-test.xml and they are configured for JPA according to the Spring Framework guidelines, so you might want to double-check your Hibernate settings as well.
