This one is really puzzling me. I have a Spring Boot (2.1.2) app where I am managing two data sources via MyBatis and Spring. I have multiple MyBatis mappers and each one is configured to use a particular data source. The code for that configuration is below:
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.mapper.MapperFactoryBean;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import javax.sql.DataSource;
@Configuration
public class MyBatisConfig {

    private static final String POSTGRES_SESSION_FACTORY = "postgresSessionFactory";
    private static final String MYSQL_SESSION_FACTORY = "mySqlDbSessionFactory";

    @Bean(name = POSTGRES_SESSION_FACTORY, destroyMethod = "")
    @Primary
    public SqlSessionFactoryBean postgresSessionFactory(
            @Qualifier(DataSourceConfig.PRIMARY_DATA_SOURCE) final DataSource oneDataSource,
            ApplicationContext applicationContext) throws Exception {
        final SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setConfigLocation(applicationContext.getResource("classpath:mybatis-config.xml"));
        sqlSessionFactoryBean.setDataSource(oneDataSource);
        final SqlSessionFactory sqlSessionFactory = sqlSessionFactoryBean.getObject();
        sqlSessionFactory.getConfiguration().addMapper(Mapper1.class);
        sqlSessionFactory.getConfiguration().addMapper(Mapper2.class);
        return sqlSessionFactoryBean;
    }

    @Bean(name = MYSQL_SESSION_FACTORY, destroyMethod = "")
    public SqlSessionFactoryBean mySqlSessionFactory(
            @Qualifier(DataSourceConfig.SECONDARY_DATA_SOURCE) final DataSource anotherDataSource,
            ApplicationContext applicationContext) throws Exception {
        final SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setConfigLocation(applicationContext.getResource("classpath:mybatis-config-sql.xml"));
        sqlSessionFactoryBean.setDataSource(anotherDataSource);
        final SqlSessionFactory sqlSessionFactory = sqlSessionFactoryBean.getObject();
        sqlSessionFactory.getConfiguration().addMapper(Mapper3.class);
        return sqlSessionFactoryBean;
    }

    @Bean
    public MapperFactoryBean<Mapper1> accountMapperFactory(@Qualifier(POSTGRES_SESSION_FACTORY) final SqlSessionFactoryBean sqlSessionFactoryBean) throws Exception {
        MapperFactoryBean<Mapper1> factoryBean = new MapperFactoryBean<>(Mapper1.class);
        factoryBean.setSqlSessionFactory(sqlSessionFactoryBean.getObject());
        return factoryBean;
    }

    @Bean
    public MapperFactoryBean<Mapper2> domainMapperFactory(@Qualifier(POSTGRES_SESSION_FACTORY) final SqlSessionFactoryBean sqlSessionFactoryBean) throws Exception {
        MapperFactoryBean<Mapper2> factoryBean = new MapperFactoryBean<>(Mapper2.class);
        factoryBean.setSqlSessionFactory(sqlSessionFactoryBean.getObject());
        return factoryBean;
    }

    @Bean
    public MapperFactoryBean<Mapper3> usageMapperFactory(@Qualifier(MYSQL_SESSION_FACTORY) final SqlSessionFactoryBean sqlSessionFactoryBean) throws Exception {
        MapperFactoryBean<Mapper3> factoryBean = new MapperFactoryBean<>(Mapper3.class);
        factoryBean.setSqlSessionFactory(sqlSessionFactoryBean.getObject());
        return factoryBean;
    }
}
If I use my debugger, I can verify that at the time these beans are initialized, all of them point to the correct data source (Mapper1's and Mapper2's SqlSessionFactory connect to the Postgres data source; Mapper3's SqlSessionFactory connects to the MySQL data source).
But strangely, when they are injected into a service, all three mappers are connected to the Postgres data source. I am beyond confused at this point.
The service and injection are quite simple:
@Autowired private Mapper1 mapper1;
@Autowired private Mapper2 mapper2;
@Autowired private Mapper3 mapper3;
However, when I call the service and pause it in the debugger, I can see that mapper3 is connected to the wrong data source (Postgres).
Any ideas? Any more information needed?
I saw the same effect when configuring multiple data sources with MyBatis: at runtime, all mappers were using the connection credentials of the last defined data source, although during startup everything looked fine.
I could narrow the problem down to the factory bean definition and the setting of the configuration. As it turns out, the configuration object is shared among all data sources (as I set them here), because I set the same instance from the MybatisProperties. At runtime, the configuration is also the place where MyBatis gets the data source from the Environment property.
The "magic" happens in factoryBean.getObject() and factoryBean.buildSqlSessionFactory(). Here, the data source is set into the configuration. When the configuration was not explicitly set, a new default one is created; otherwise, the configuration object is reused. Thus, when several factory beans are initialized with the same configuration object, the configuration object gets every data source injected in turn, and the last one remains.
This might be a bug, because as a MyBatis user I expect MyBatis to use the data source I provide with factoryBean.setDataSource(aDataSource). So, when I commented out factoryBean.setConfiguration, everything worked as intended, but of course my configuration was no longer applied.
#Bean("sqlSessionFactoryA")
public SqlSessionFactory sqlSessionFactory(#Qualifier("datasourceA") DataSource aDataSource) throws Exception {
SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
factoryBean.setDataSource(aDataSource);
factoryBean.setConfigurationProperties(properties.getConfigurationProperties());
// factoryBean.setConfiguration(properties.getConfiguration());
return factoryBean.getObject();
}
As a solution, I set the MybatisProperties bean to prototype scope, so that a new properties object with a new configuration object is created on each injection. This is necessary because I don't want to write a copy method, and org.apache.ibatis.session.Configuration does not provide a copy constructor.
@Bean
@Primary
@Scope("prototype")
public MybatisProperties myBatisProperties() {
    return new MybatisProperties();
}
Now I could also apply the configuration and everything worked fine.
org.apache.ibatis.session.Configuration tmpConfiguration = properties.getConfiguration();
factoryBean.setConfiguration(tmpConfiguration);
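Putting the pieces together, the factory bean definition can then safely set the configuration. A minimal sketch (the bean and qualifier names are illustrative; it assumes the prototype-scoped MybatisProperties bean above, so each injection point receives a fresh Configuration instance):

@Bean("sqlSessionFactoryB")
public SqlSessionFactory sqlSessionFactoryB(
        @Qualifier("datasourceB") DataSource bDataSource,
        MybatisProperties properties) throws Exception { // prototype scope: fresh instance here
    SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
    factoryBean.setDataSource(bDataSource);
    // safe now: this Configuration instance is not shared with any other factory bean
    factoryBean.setConfiguration(properties.getConfiguration());
    return factoryBean.getObject();
}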
Related
I am writing a library which retrieves data from a specific data schema. This library holds a DataSource object, which can be anything. Right now the data source name is defined within the library, which I would like to avoid.
import javax.sql.DataSource;

public class MyLibraryDao {

    private static final String DS_NAME = "MY_DS_NAME";

    @Resource(name = "default", lookup = DS_NAME, type = DataSource.class)
    protected DataSource dataSource;
}
The DAO class is not directly exposed to the client; there is a service layer in between:
import javax.inject.Inject;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Model;

@ApplicationScoped
@Model
public class MyLibraryService {
    @Inject
    MyLibraryDao dao;
}
Now, how would I pass the datasource object to the library?
I assume I need to create a constructor in the DAO which takes a DataSource, but what about the service?
The library will be used in a CDI environment.
First things first: your library needs a data source, so let's declare the dependency:
public class MyLibraryDao {
    @Inject
    protected DataSource dataSource;
}
Now the rest of the application that is using the library is responsible for providing a data source to CDI; a simple way is:
// Example; your implementation may vary
public class AppDatasourceProducer {

    private static final String DS_NAME = "MY_APP_DS_NAME";

    @Resource(name = "default", lookup = DS_NAME, type = DataSource.class)
    protected DataSource dataSource;

    @Produces
    @ApplicationScoped
    public DataSource getDatasource() {
        return dataSource;
    }
}
What's changed? Now your application is responsible for knowing the datasource name AND providing the datasource itself. The example above can work in JEE environments that honor the @Resource annotation. Using a different implementation for the provider would work in e.g. a desktop environment (standalone application), making your library reusable.
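For example, in a standalone (non-JEE) application, the producer could build the pool itself. A sketch, assuming HikariCP is on the classpath; the URL and credentials are placeholders:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Example: standalone producer that builds its own pool instead of using @Resource
public class StandaloneDatasourceProducer {

    @Produces
    @ApplicationScoped
    public DataSource getDatasource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        config.setUsername("user");   // placeholder
        config.setPassword("secret"); // placeholder
        return new HikariDataSource(config);
    }
}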
The exact datasource name may be fixed, just like in the example, or read from configuration, e.g. from system properties (as mentioned in a comment); e.g.:
// Example 2, DS name from system properties
@ApplicationScoped
public class AppDatasourceProducer {

    protected DataSource dataSource;

    @PostConstruct
    void init() throws Exception {
        String dsName = System.getProperty("XXXXX");
        InitialContext ic = new InitialContext();
        dataSource = (DataSource) ic.lookup(dsName);
    }

    @Produces
    @ApplicationScoped
    public DataSource getDatasource() {
        return dataSource;
    }
}
Going further:
An application that uses your library may be using several datasources, for whatever reason. You may want to provide a qualifier to specify the datasource to be used by your app (see the sketch after this list).
I used field injection in MyLibraryDao for simplicity. If you change to constructor injection, then at least MyLibraryDao will be usable in non-CDI environments as well, i.e. if you have obtained a DataSource somehow, you can now do new MyLibraryDao(dataSource). Even more reusability.
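A minimal sketch of such a qualifier (the annotation name MyLibraryDb is made up):

import javax.inject.Qualifier;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.ElementType.TYPE;

// CDI qualifier marking the data source intended for the library
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({FIELD, METHOD, PARAMETER, TYPE})
public @interface MyLibraryDb {
}

Both the producer method and the injection point in MyLibraryDao would then carry @MyLibraryDb, letting CDI disambiguate between several DataSource beans.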
I have a data source configuration class that looks as follows, with separate DataSource beans for testing and non-testing environments using jOOQ. In my code, I do not use DSLContext.transaction(ctx -> {...}) but rather mark the method as transactional, so that jOOQ defers to Spring's declarative transactions. I am using Spring 4.3.7.RELEASE.
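For illustration, the usage pattern described above looks roughly like this (a sketch; AccountService, ACCOUNT and AccountRecord are made-up stand-ins for a service class and jOOQ-generated classes):

@Service
public class AccountService {

    @Autowired
    private DSLContext dsl;

    @Transactional // Spring manages the transaction; jOOQ works on the transaction-bound connection
    public void createAccounts() {
        AccountRecord first = dsl.newRecord(ACCOUNT);
        first.setName("a");
        first.store();

        AccountRecord second = dsl.newRecord(ACCOUNT);
        second.setName("b");
        second.store();

        // a RuntimeException thrown here should roll back both stores
    }
}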
I have the following issue:
During testing (JUnit), @Transactional works as expected. A single method is transactional no matter how many times I use the DSLContext's store() method, and a RuntimeException triggers a rollback of the entire transaction.
During actual production runtime, @Transactional is completely ignored. A method is no longer transactional, and TransactionSynchronizationManager.getResourceMap() holds two separate values: one pointing at my connection pool (which is not transactional) and one pointing at the TransactionAwareDataSourceProxy.
In this case, I would have expected only a single resource of type TransactionAwareDataSourceProxy, which wraps my DB CP.
After much trial and error using the second set of configuration changes I made (noted below with "AFTER"), @Transactional works correctly as expected even during runtime, though TransactionSynchronizationManager.getResourceMap() holds the following value:
In this case, my DataSourceTransactionManager seems to not even know about the TransactionAwareDataSourceProxy (most likely due to my passing it the plain DataSource, not the proxy object), which seems to completely 'skip' the proxy anyway.
My question is: the initial configuration that I had seemed correct, but did not work. The proposed 'fix' works, but in my opinion should not work at all (since the transaction manager does not seem to be aware of the TransactionAwareDataSourceProxy).
What is going on here? Is there a cleaner way to fix this issue?
BEFORE (not transactional during runtime)
@Configuration
@EnableTransactionManagement
@RefreshScope
@Slf4j
public class DataSourceConfig {

    @Bean
    @Primary
    public DSLContext dslContext(org.jooq.Configuration configuration) throws SQLException {
        return new DefaultDSLContext(configuration);
    }

    @Bean
    @Primary
    public org.jooq.Configuration defaultConfiguration(DataSourceConnectionProvider dataSourceConnectionProvider) {
        org.jooq.Configuration configuration = new DefaultConfiguration()
                .derive(dataSourceConnectionProvider)
                .derive(SQLDialect.POSTGRES_9_5);
        configuration.set(new DeleteOrUpdateWithoutWhereListener());
        return configuration;
    }

    @Bean
    public DataSourceTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean
    public DataSourceConnectionProvider dataSourceConnectionProvider(DataSource dataSource) {
        return new DataSourceConnectionProvider(dataSource);
    }

    @Configuration
    @ConditionalOnClass(EmbeddedPostgres.class)
    static class EmbeddedDataSourceConfig {

        @Value("${spring.jdbc.port}")
        private int dbPort;

        @Bean(destroyMethod = "close")
        public EmbeddedPostgres embeddedPostgres() throws Exception {
            return EmbeddedPostgresHelper.startDatabase(dbPort);
        }

        @Bean
        @Primary
        public DataSource dataSource(EmbeddedPostgres embeddedPostgres) throws Exception {
            DataSource dataSource = embeddedPostgres.getPostgresDatabase();
            return new TransactionAwareDataSourceProxy(dataSource);
        }
    }

    @Configuration
    @ConditionalOnMissingClass("com.opentable.db.postgres.embedded.EmbeddedPostgres")
    @RefreshScope
    static class DefaultDataSourceConfig {

        @Value("${spring.jdbc.url}")
        private String url;

        @Value("${spring.jdbc.username}")
        private String username;

        @Value("${spring.jdbc.password}")
        private String password;

        @Value("${spring.jdbc.driverClass}")
        private String driverClass;

        @Value("${spring.jdbc.MaximumPoolSize}")
        private Integer maxPoolSize;

        @Bean
        @Primary
        @RefreshScope
        public DataSource dataSource() {
            log.debug("Connecting to datasource: {}", url);
            HikariConfig hikariConfig = buildPool();
            DataSource dataSource = new HikariDataSource(hikariConfig);
            return new TransactionAwareDataSourceProxy(dataSource);
        }

        private HikariConfig buildPool() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(url);
            config.setUsername(username);
            config.setPassword(password);
            config.setDriverClassName(driverClass);
            config.setConnectionTestQuery("SELECT 1");
            config.setMaximumPoolSize(maxPoolSize);
            return config;
        }
    }
}
AFTER (transactional during runtime, as expected, all non-listed beans identical to above)
@Configuration
@EnableTransactionManagement
@RefreshScope
@Slf4j
public class DataSourceConfig {

    @Bean
    public DataSourceConnectionProvider dataSourceConnectionProvider(TransactionAwareDataSourceProxy dataSourceProxy) {
        return new DataSourceConnectionProvider(dataSourceProxy);
    }

    @Bean
    public TransactionAwareDataSourceProxy transactionAwareDataSourceProxy(DataSource dataSource) {
        return new TransactionAwareDataSourceProxy(dataSource);
    }

    @Configuration
    @ConditionalOnMissingClass("com.opentable.db.postgres.embedded.EmbeddedPostgres")
    @RefreshScope
    static class DefaultDataSourceConfig {

        @Value("${spring.jdbc.url}")
        private String url;

        @Value("${spring.jdbc.username}")
        private String username;

        @Value("${spring.jdbc.password}")
        private String password;

        @Value("${spring.jdbc.driverClass}")
        private String driverClass;

        @Value("${spring.jdbc.MaximumPoolSize}")
        private Integer maxPoolSize;

        @Bean
        @Primary
        @RefreshScope
        public DataSource dataSource() {
            log.debug("Connecting to datasource: {}", url);
            HikariConfig hikariConfig = buildPoolConfig();
            DataSource dataSource = new HikariDataSource(hikariConfig);
            return dataSource; // not returning the proxy here
        }
    }
}
I'll turn my comments into an answer.
The transaction manager should NOT be aware of the proxy. From the documentation:
Note that the transaction manager, for example
DataSourceTransactionManager, still needs to work with the underlying
DataSource, not with this proxy.
The class TransactionAwareDataSourceProxy is a special-purpose class that is not needed in most cases. Anything that interfaces with your data source through the Spring framework infrastructure should NOT have the proxy in its chain of access. The proxy is intended for code that cannot interface with the Spring infrastructure, for example a third-party library that was already set up to work with plain JDBC and does not accept any of Spring's JDBC templates. This is stated in the same docs as above:
This proxy allows data access code to work with the plain JDBC API and
still participate in Spring-managed transactions, similar to JDBC code
in a J2EE/JTA environment. However, if possible, use Spring's
DataSourceUtils, JdbcTemplate or JDBC operation objects to get
transaction participation even without a proxy for the target
DataSource, avoiding the need to define such a proxy in the first
place.
If you do not have any code that needs to bypass the Spring framework, then do not use TransactionAwareDataSourceProxy at all. If you do have legacy code like this, then you will need what you already configured in your second setup: two beans, one being the data source and one being the proxy. Give the data source to all of the Spring-managed types and the proxy to the legacy types.
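To illustrate the split, a sketch of two bean definitions inside such a configuration class (LegacyReportingDao is a made-up stand-in for third-party code that calls getConnection() directly):

// Spring-managed infrastructure gets the plain pooled DataSource
@Bean
public JdbcTemplate jdbcTemplate(DataSource dataSource) {
    return new JdbcTemplate(dataSource);
}

// Legacy code that uses the raw JDBC API gets the proxy, so its connections
// still participate in Spring-managed transactions
@Bean
public LegacyReportingDao legacyReportingDao(TransactionAwareDataSourceProxy proxy) {
    return new LegacyReportingDao(proxy);
}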
Update 1 (scroll down)
The setup is as follows:
Our application database is constructed and used by two separate users:
SCHEMA - a user with the authority to create tables and grant permissions on them, and
APP - a user who is granted permissions (INSERT, UPDATE, DELETE, SELECT) by SCHEMA on the tables above.
This enables us to lock down any schema changes until needed, so no profound changes can happen through the app user.
I am running integration tests with a live Oracle database that contains both of these users. On the class itself, I use @SqlConfig(dataSource = "schemaDataSource", transactionManager = "transactionManagerSchema").
On the test method I place two @Sql annotations, which fail because, in the SqlScriptsTestExecutionListener class, the transaction is not managing the same data source (hence the error message further below).
I've tried setting the data source on the transaction manager manually, as shown in my config class below, however some unknown process seems to override it every time. (My best guess is the @DataJpaTest annotation, but I don't know exactly which of the 11 auto-configurations does it; as you can see, I've already disabled a couple with no effect.)
Test Class:
@RunWith(SpringRunner.class)
@DataJpaTest(excludeAutoConfiguration = {TestDatabaseAutoConfiguration.class, DataSourceAutoConfiguration.class})
@FlywayTest
@SqlConfig(dataSource = TestDataSourceConfig.SCHEMA_DATA_SOURCE, transactionManager = "transactionManagerSchema")
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, classes = {TestDataSourceConfig.class, TestFlywayConfig.class})
@EntityScan(basePackageClasses = BaseEnum.class)
public class NotificationTypeEnumTest {

    @Autowired
    private EntityManager em;

    @Test
    @Sql(statements = {"INSERT INTO MYAPP_ENUM (ENUM_ID, \"TYPE\", \"VALUE\") VALUES (MYAPP_ENUM_ID_SEQ.nextval, '" + NotificationTypeEnum.DTYPE + "', 'foo')"}, executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD)
    @Sql(statements = {"DELETE FROM MYAPP_ENUM"}, executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
    public void canFetchNotificationTypeEnum() throws Exception {
        TypedQuery<NotificationTypeEnum> query = em.createQuery("select a from NotificationTypeEnum a", NotificationTypeEnum.class);
        NotificationTypeEnum result = query.getSingleResult();
        assertEquals("foo", result.getValue());
        assertEquals(NotificationTypeEnum.DTYPE, result.getConfigType());
    }
}
DataSource and TM config:
@Slf4j @Configuration @EnableTransactionManagement
public class TestDataSourceConfig {

    public static final String SCHEMA_DATA_SOURCE = "schemaDataSource";
    public static final String SCHEMA_TRANSACTION_MANAGER = "schemaTransactionManager";

    /* Main DataSource and supporting beans */
    @Bean @Primary @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() { return new DriverManagerDataSource(); }

    @Bean @Primary @Autowired
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) { return new JpaTransactionManager(emf); }

    @Bean(name = SCHEMA_DATA_SOURCE) @ConfigurationProperties(prefix = "myapp.datasource.test_schema")
    public DataSource schemaDataSource() { return new DriverManagerDataSource(); }

    @Bean(name = SCHEMA_TRANSACTION_MANAGER) @Autowired
    public PlatformTransactionManager transactionManagerSchema(@Qualifier(SCHEMA_DATA_SOURCE) DataSource dataSource) {
        JpaTransactionManager jpaTransactionManager = new JpaTransactionManager();
        jpaTransactionManager.setDataSource(dataSource);
        return jpaTransactionManager;
    }
}
The full error that I couldn't fit in the title is:
java.lang.IllegalStateException: Failed to execute SQL scripts for test context
...
SOME LONG STACK TRACE
...
the configured DataSource [org.springframework.jdbc.datasource.DriverManagerDataSource] (named 'schemaDataSource') is not the one associated with transaction manager [org.springframework.orm.jpa.JpaTransactionManager] (named 'transactionManagerSchema').
When there is a single DataSource, the Spring auto-configuration model appears to work fine; however, as soon as there are two or more, the assumptions break down and the programmer must manually fill in the sudden (plentiful) gaps in the required configuration.
Am I missing some fundamental understanding surrounding DataSources and TransactionManagers?
Update 1
After some debugging, I discovered that the afterPropertiesSet() method is called on the bean I created when the TransactionManager is retrieved for use with the @Sql script annotation. This causes whatever EntityManagerFactory it owns (i.e. JpaTransactionManager.entityManagerFactory) to set the data source according to its configured EntityManagerFactoryInfo.getDataSource(). The EntityManagerFactory itself is set as a result of the JpaTransactionManager.setBeanFactory method being called (as it implements BeanFactoryAware).
Here is the Spring code:
// JpaTransactionManager.java
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
    if (getEntityManagerFactory() == null) {
        if (!(beanFactory instanceof ListableBeanFactory)) {
            throw new IllegalStateException("Cannot retrieve EntityManagerFactory by persistence unit name " +
                    "in a non-listable BeanFactory: " + beanFactory);
        }
        ListableBeanFactory lbf = (ListableBeanFactory) beanFactory;
        setEntityManagerFactory(EntityManagerFactoryUtils.findEntityManagerFactory(lbf, getPersistenceUnitName()));
    }
}
I then tried creating my own EntityManagerFactory bean to inject into my transaction manager, but this seemed to open up Hibernate-specific classes, and I wish to stay abstracted at the JPA level. It was also difficult to configure at first glance.
Finally, a JPA-only solution!
The solution was to control the creation of the EntityManagerFactory beans using the EntityManagerFactoryBuilder component provided by Spring, and to inject the EntityManager into the test using the @PersistenceContext annotation.
@SqlConfig(dataSource = TestDataSourceConfig.SCHEMA_DATA_SOURCE, transactionManager = SCHEMA_TRANSACTION_MANAGER, transactionMode = SqlConfig.TransactionMode.ISOLATED)
...
public class MyJUnitTest {

    @PersistenceContext(unitName = "pu")
    private EntityManager em;
    ...
    @Test
    @Sql(statements = {"SOME SQL USING THE PRIVILEGED SCHEMA CONNECTION"}, ...)
    public void myTest() {
        em.createQuery("...").getResultList(); // uses the APP database user
    }
}
Below is the configuration for both data sources. The application-related DataSource beans all have @Primary in their definition to disambiguate any @Autowired dependencies. No Hibernate-specific classes are needed other than the automatic Hibernate configuration done through @DataJpaTest.
@Configuration
@EnableTransactionManagement
@EnableConfigurationProperties(JpaProperties.class)
public class TestDataSourceConfig {

    public static final String SCHEMA_DATA_SOURCE = "schemaDS";
    public static final String SCHEMA_TRANSACTION_MANAGER = "schemaTM";
    public static final String SCHEMA_EMF = "schemaEMF";

    /* Main DataSource and supporting beans */
    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return new DriverManagerDataSource();
    }

    @Bean @Primary @Autowired
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) { return new JpaTransactionManager(emf); }

    @Bean @Primary
    public LocalContainerEntityManagerFactoryBean emfBean(
            EntityManagerFactoryBuilder entityManagerFactoryBuilder,
            DataSource datasource,
            JpaProperties jpaProperties) {
        return entityManagerFactoryBuilder
                .dataSource(datasource)
                .jta(false)
                .packages(CourseOffering.class)
                .persistenceUnit("pu")
                .properties(jpaProperties.getProperties())
                .build();
    }

    @Bean(name = SCHEMA_EMF)
    public LocalContainerEntityManagerFactoryBean emfSchemaBean(
            EntityManagerFactoryBuilder entityManagerFactoryBuilder,
            @Qualifier(SCHEMA_DATA_SOURCE) DataSource schemaDataSource,
            JpaProperties jpaProperties) {
        return entityManagerFactoryBuilder
                .dataSource(schemaDataSource)
                .jta(false)
                .packages(CourseOffering.class)
                .persistenceUnit("spu")
                .properties(jpaProperties.getProperties())
                .build();
    }

    @Bean(name = SCHEMA_DATA_SOURCE)
    @ConfigurationProperties(prefix = "myapp.datasource.test_schema")
    public DataSource schemaDataSource() { return new DriverManagerDataSource(); }

    @Bean(name = SCHEMA_TRANSACTION_MANAGER)
    public PlatformTransactionManager transactionManagerSchema(
            @Qualifier(SCHEMA_EMF) EntityManagerFactory emfSchemaBean) {
        JpaTransactionManager jpaTransactionManager = new JpaTransactionManager();
        jpaTransactionManager.setEntityManagerFactory(emfSchemaBean);
        return jpaTransactionManager;
    }
}
Actual Test Class:
@RunWith(SpringRunner.class) // required for all Spring tests
@DataJpaTest(excludeAutoConfiguration = {TestDatabaseAutoConfiguration.class, DataSourceAutoConfiguration.class}) // stops the default data source and database from being configured
@SqlConfig(dataSource = TestDataSourceConfig.SCHEMA_DATA_SOURCE, transactionManager = SCHEMA_TRANSACTION_MANAGER, transactionMode = SqlConfig.TransactionMode.ISOLATED) // make sure the @Sql statements run using the SCHEMA datasource and txManager, in an isolated way, so as not to cause problems when running test methods that require these statements
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, classes = {TestDataSourceConfig.class})
@TestExecutionListeners({
        SqlScriptsTestExecutionListener.class, // enables the @Sql script annotations to work
        SpringBootDependencyInjectionTestExecutionListener.class, // injects Spring components into the test (i.e. the EntityManager)
        TransactionalTestExecutionListener.class}) // kept even though the @Transactional annotations don't exist yet, as I plan on using them in further tests
public class NotificationTypeEnumTest {

    @PersistenceContext(unitName = "pu") // required to inject the correct EntityManager
    private EntityManager em;

    // these statements run before and after the test method, respectively
    @Test
    @Sql(statements = {"INSERT INTO MYAPP_ENUM (ENUM_ID, \"TYPE\", \"VALUE\") VALUES (MYAPP_ENUM_ID_SEQ.nextval, '" + NotificationTypeEnum.DTYPE + "', 'foo')"}, executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD)
    @Sql(statements = {"DELETE FROM MYAPP_ENUM"}, executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
    public void canFetchNotificationTypeEnum() throws Exception {
        TypedQuery<NotificationTypeEnum> query = em.createQuery("select a from NotificationTypeEnum a", NotificationTypeEnum.class); // NotificationTypeEnum is just a subclass of the BaseEnum type
        NotificationTypeEnum result = query.getSingleResult();
        assertEquals("foo", result.getValue());
        assertEquals(NotificationTypeEnum.DTYPE, result.getConfigType());
    }
}
Noteworthy classes:
EntityManagerFactoryBuilder - I don't like factory factories, but this one served me well in creating the correct implementation of EntityManagerFactory without depending on any Hibernate-specific classes. It may be injected with @Autowired. The builder bean itself is configured through the HibernateJpaAutoConfiguration class (which extends JpaBaseConfiguration), imported by @DataJpaTest.
JpaProperties - useful for carrying the application.properties config into the resulting EntityManagerFactories; enabled through the @EnableConfigurationProperties(JpaProperties.class) annotation above this config class.
@PersistenceContext(unitName = "...") - lets me inject the correct EntityManager into my test class.
I am trying to configure multiple JPA entity/transaction managers within the same application context using Spring's @Configuration class.
When the context loads, Spring has difficulty auto-wiring the beans because they implement the same interfaces.
Unfortunately, I'm working with legacy code, so I can't auto-wire the beans directly and use @Qualifier annotations, which is why I'm trying to do it in the configuration class.
Within a @Bean declaration, is there any way to qualify which bean should be injected? I thought that using a direct method call would be enough, but it typically results in errors such as
NoUniqueBeanDefinitionException: No qualifying bean of type
[javax.sql.DataSource] is defined: expected single matching bean but
found 4
Here's an example of what I'm trying to do below:
@Configuration
public class ApplicationConfig {

    @Bean(name = "transactionManager1")
    public PlatformTransactionManager transactionManager1() {
        return new JpaTransactionManager(entityManagerFactory1());
    }

    @Bean(name = "entityManagerFactory1")
    public EntityManagerFactory entityManagerFactory1() {
        ...
        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setDataSource(dataSource1());
        ...
    }

    @Bean(destroyMethod = "")
    @ConfigurationProperties(prefix = "datasource.test1")
    public JndiObjectFactoryBean jndiObjectFactoryBean1() {
        return new JndiObjectFactoryBean();
    }

    @Bean(name = "dataSource1")
    public DataSource dataSource1() {
        JndiDataSourceLookup lookup = new JndiDataSourceLookup();
        return lookup.getDataSource(jndiObjectFactoryBean1().getJndiName());
    }

    @Bean(name = "transactionManager2")
    public PlatformTransactionManager transactionManager2() {
        return new JpaTransactionManager(entityManagerFactory2());
    }

    @Bean(name = "entityManagerFactory2")
    public EntityManagerFactory entityManagerFactory2() {
        ...
        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setDataSource(dataSource2());
        ...
    }

    @Bean(destroyMethod = "")
    @ConfigurationProperties(prefix = "datasource.test2")
    public JndiObjectFactoryBean jndiObjectFactoryBean2() {
        return new JndiObjectFactoryBean();
    }

    @Bean(name = "dataSource2")
    public DataSource dataSource2() {
        JndiDataSourceLookup lookup = new JndiDataSourceLookup();
        return lookup.getDataSource(jndiObjectFactoryBean2().getJndiName());
    }
}
I suppose I could try to fetch the beans directly via the Spring context's getBean() method, but is there a cleaner way of doing this?
I'm not too familiar with the @Primary annotation, but based on what I've read I don't see how Spring would autowire the secondary data source in this case, since it looks like it would always pick the @Primary beans first.
If you cannot change the injection sites to add qualifiers, then you're going to have to create a delegating DataSource based on some logic (which you haven't detailed in the question).
Something like this.
@Primary @Bean
public DelegatingDataSource delegatingDataSource(List<DataSource> sources) {
    return new DelegatingDataSource() {
        @Override
        public DataSource getTargetDataSource() {
            // decide which dataSource to delegate to
            return sources.get(0);
        }
    };
}
I've used DelegatingDataSource, but that may not be able to provide what you need. You may need to get more advanced with some kind of interceptor/aspect that captures details of the caller on which to base the DataSource selection.
However it's implemented, you need to specify a @Primary bean and use it as a proxy.
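If the selection can be keyed off something thread-bound, Spring's AbstractRoutingDataSource is a ready-made variant of this idea. A minimal sketch (the ThreadLocal key handling is illustrative):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routes each getConnection() call to a target DataSource chosen by a thread-bound key
public class RoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> KEY = new ThreadLocal<>();

    public static void use(String key) { KEY.set(key); }
    public static void clear() { KEY.remove(); }

    @Override
    protected Object determineCurrentLookupKey() {
        return KEY.get(); // null falls back to the default target DataSource
    }
}

You would register it as the @Primary DataSource and supply the real data sources via setTargetDataSources(...) and setDefaultTargetDataSource(...).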
I am trying to configure a couple of data sources within Spring Batch. On startup, Spring Batch throws the following exception:
To use the default BatchConfigurer the context must contain no more than one DataSource, found 2
Snippet from Batch Configuration
@Configuration
@EnableBatchProcessing
public class BatchJobConfiguration {

    @Primary
    @Bean(name = "baseDatasource")
    public DataSource dataSource() {
        // first datasource definition here
    }

    @Bean(name = "secondaryDataSource")
    public DataSource dataSource2() {
        // second datasource definition here
    }
    ...
}
I'm not sure why I am seeing this exception, because I have seen XML-based Spring Batch configurations that declare multiple data sources. I am using Spring Batch core version 3.0.1.RELEASE with Spring Boot version 1.1.5.RELEASE. Any help would be greatly appreciated.
You must provide your own BatchConfigurer; Spring does not want to make that decision for you.
@Configuration
@EnableBatchProcessing
public class BatchConfig {

    @Bean
    BatchConfigurer configurer(@Qualifier("batchDataSource") DataSource dataSource) {
        return new DefaultBatchConfigurer(dataSource);
    }
    ...
}
AbstractBatchConfiguration tries to look up a BatchConfigurer in the container first; if none is found, it tries to create one itself - this is where the IllegalStateException is thrown when there is more than one DataSource bean in the container.
The approach to solving the problem is to prevent the creation of the DefaultBatchConfigurer bean in AbstractBatchConfiguration.
To do that, we hint the Spring container to create the DefaultBatchConfigurer via the @Component annotation:
We annotate the configuration class where @EnableBatchProcessing is placed with @ComponentScan, scanning the package that contains an empty class derived from DefaultBatchConfigurer:
package batch_config;

...

@EnableBatchProcessing
@ComponentScan(basePackageClasses = MyBatchConfigurer.class)
public class MyBatchConfig {
    ...
}
The full code of that empty derived class is here:
package batch_config.components;

import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.stereotype.Component;

@Component
public class MyBatchConfigurer extends DefaultBatchConfigurer {
}
In this configuration, the @Primary annotation works for the DataSource bean, as in the example below:
@Configuration
public class BatchTestDatabaseConfig {

    @Bean
    @Primary
    public DataSource dataSource() {
        return .........;
    }
}
This works for Spring Batch version 3.0.3.RELEASE.
The simplest solution to make the @Primary annotation work on a DataSource might be just to add @ComponentScan(basePackageClasses = DefaultBatchConfigurer.class) along with the @EnableBatchProcessing annotation:
@Configuration
@EnableBatchProcessing
@ComponentScan(basePackageClasses = DefaultBatchConfigurer.class)
public class MyBatchConfig {
    ...
}
I would like to provide a solution here which is very similar to the one answered by @vanarchi, but I managed to put all the necessary configuration into one class.
For the sake of completeness, the solution here assumes that the primary data source is HSQL.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration extends DefaultBatchConfigurer {

    @Bean
    @Primary
    public DataSource batchDataSource() {
        // no need to shut down; EmbeddedDatabaseFactoryBean will take care of this
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        EmbeddedDatabase embeddedDatabase = builder
                .addScript("classpath:org/springframework/batch/core/schema-drop-hsqldb.sql")
                .addScript("classpath:org/springframework/batch/core/schema-hsqldb.sql")
                .setType(EmbeddedDatabaseType.HSQL) // .H2 or .DERBY
                .build();
        return embeddedDatabase;
    }

    @Override
    protected JobRepository createJobRepository() throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(batchDataSource());
        factory.setTransactionManager(transactionManager());
        factory.afterPropertiesSet();
        return (JobRepository) factory.getObject();
    }

    private ResourcelessTransactionManager transactionManager() {
        return new ResourcelessTransactionManager();
    }

    // NOTE: the code below just gives the developer an easy way to access the in-memory
    // HSQL datasource, as we configured it as the primary datasource for storing batch
    // job related data. Default username: sa, password: ''
    @PostConstruct
    public void getDbManager() {
        DatabaseManagerSwing.main(
                new String[] { "--url", "jdbc:hsqldb:mem:testdb", "--user", "sa", "--password", "" });
    }
}
THREE key points in this solution:
This class is annotated with @EnableBatchProcessing and @Configuration, and extends DefaultBatchConfigurer. By doing this, we instruct Spring Batch to use our customized batch configurer when AbstractBatchConfiguration tries to look up a BatchConfigurer;
Annotate the batchDataSource bean as @Primary, which instructs Spring Batch to use this data source as its data source for storing the nine job-related tables.
Override the protected JobRepository createJobRepository() throws Exception method, which makes the jobRepository bean use the primary data source, as well as a transactionManager instance different from the one used by the other data source(s).
The simplest solution is to extend DefaultBatchConfigurer and autowire your data source via a qualifier:
@Component
public class MyBatchConfigurer extends DefaultBatchConfigurer {

    /**
     * Initialize the BatchConfigurer to use the data source of your choosing
     * @param firstDataSource
     */
    @Autowired
    public MyBatchConfigurer(@Qualifier("firstDataSource") DataSource firstDataSource) {
        super(firstDataSource);
    }
}
Side note (as this also deals with the use of multiple data sources): if you use auto-configuration to run data initialization scripts, you may notice that they don't run on the data source you'd expect. For that issue, take a look at this: https://github.com/spring-projects/spring-boot/issues/9528
You can define the beans below; make sure your application.properties file has the entries needed for each data source:
@Configuration
@PropertySource("classpath:application.properties")
public class DataSourceConfig {

    @Primary
    @Bean(name = "abcDataSource")
    @ConfigurationProperties(prefix = "abc.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }

    @Bean(name = "xyzDataSource")
    @ConfigurationProperties(prefix = "xyz.datasource")
    public DataSource xyzDataSource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }
}
application.properties
abc.datasource.jdbc-url=XXXXX
abc.datasource.username=XXXXX
abc.datasource.password=xxxxx
abc.datasource.driver-class-name=org.postgresql.Driver
...
For reference, see: Spring Boot Configure and Use Two DataSources
First, create a custom BatchConfigurer:
@Configuration
@Component
public class TwoDataSourcesBatchConfigurer implements BatchConfigurer {

    @Autowired
    @Qualifier("dataSource1")
    DataSource dataSource;

    @Override
    public JobExplorer getJobExplorer() throws Exception {
        ...
    }

    @Override
    public JobLauncher getJobLauncher() throws Exception {
        ...
    }

    @Override
    public JobRepository getJobRepository() throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        // use the autowired data source
        factory.setDataSource(dataSource);
        factory.setTransactionManager(getTransactionManager());
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    @Override
    public PlatformTransactionManager getTransactionManager() throws Exception {
        ...
    }
}
Then,
@Configuration
@EnableBatchProcessing
@ComponentScan("package")
public class JobConfig {
    // define job, step, ...
}
}