My application works with multiple data sources and two databases, Oracle and PostgreSQL (I don't need global transactions).
I don't know which transaction manager to use. Both have advantages and disadvantages.
Atomikos supports global transactions, which I don't need, and logs some information about transactions to the file system, which I want to avoid:
public void setEnableLogging(boolean enableLogging)
Specifies if disk logging should be enabled or not. Defaults to true.
It is useful for JUnit testing, or to profile code without seeing the
transaction manager's activity as a hot spot but this should never be
disabled on production or data integrity cannot be guaranteed.
Its advantage is that it uses just one transaction manager.
With DataSourceTransactionManager I need one per data source:
@Bean
@Primary
DataSourceTransactionManager transactionManager1() {
    DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
    transactionManager.setDataSource(dataSource1());
    return transactionManager;
}

@Bean
DataSourceTransactionManager transactionManager2() {
    DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
    transactionManager.setDataSource(dataSource2());
    return transactionManager;
}
This is a problem because I need to specify the name of the transaction manager in the annotation:
@Transactional("transactionManager1")
public void test() {
}
but I don't know it in advance, because at runtime the application can switch which database to use.
Are there other options, or am I missing something with these two transaction managers?
You should solve this with option 2: one DataSourceTransactionManager per data source. You will need to keep track of the transaction manager for each data source; one way to do that is sketched below.
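A minimal sketch of that idea, assuming the two DataSourceTransactionManager beans from the question and using a programmatic TransactionTemplate, since a @Transactional qualifier cannot be chosen at runtime (the qualifier names and the "oracle"/"postgres" keys are just placeholders):

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class SwitchingTransactionHelper {

    private final Map<String, PlatformTransactionManager> managers = new HashMap<>();

    public SwitchingTransactionHelper(
            @Qualifier("transactionManager1") PlatformTransactionManager oracleTm,
            @Qualifier("transactionManager2") PlatformTransactionManager postgresTm) {
        managers.put("oracle", oracleTm);
        managers.put("postgres", postgresTm);
    }

    // Runs the given work in a transaction on whichever database is active right now.
    public void runInTransaction(String activeDb, Runnable work) {
        TransactionTemplate template = new TransactionTemplate(managers.get(activeDb));
        template.execute(status -> {
            work.run();
            return null;
        });
    }
}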
Additionally, if you need to be able to roll back transactions on both databases, you will have to set up a ChainedTransactionManager over both, along the lines of the sketch that follows.
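A rough sketch of such a ChainedTransactionManager (it comes from spring-data-commons, in org.springframework.data.transaction, provides only best-effort commit ordering rather than two-phase commit, and is deprecated in recent Spring Data releases):

@Bean
public PlatformTransactionManager chainedTransactionManager(
        DataSourceTransactionManager transactionManager1,
        DataSourceTransactionManager transactionManager2) {
    // transactions start in the given order and commit/roll back in reverse order;
    // this is best-effort ordering, not a two-phase commit
    return new ChainedTransactionManager(transactionManager1, transactionManager2);
}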
I have a Spring Boot app where I use JMS together with a database. I'm trying to configure a JmsTransactionManager to use alongside the default transaction manager (for JPA). I defined the bean in the @SpringBootApplication class (which means it has @Configuration and @EnableTransactionManagement):
@Bean(name = "jmsTransactionManager")
public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
    JmsTransactionManager jmsTransactionManager = new JmsTransactionManager();
    jmsTransactionManager.setConnectionFactory(connectionFactory);
    return jmsTransactionManager;
}
That's the only bean I configure myself for JMS, because Spring Boot does the rest of the configuration automatically; I just have properties in application.yaml, so I assume the ConnectionFactory will be autowired. And I want to use it like this:
@Transactional(transactionManager = "jmsTransactionManager", propagation = Propagation.REQUIRES_NEW)
void doWork() {
    sendJms();
    saveDb();
}

@Transactional // uses the default JPA transaction manager
void saveDb() {
    ...
}
So the logic is that I send to JMS first, then save something to the DB, so I need two separate transactions, but I want to close the DB transaction before the JMS transaction. Maybe it's not correct to make calls like this in such a situation, but I don't know how else to do it with declarative transaction management. The problem is that when I define the JmsTransactionManager, the default one that works with the DB stops working, but without the JmsTransactionManager the DB transactions work:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'transactionManager' available: No matching TransactionManager bean found for qualifier 'transactionManager' - neither qualifier
match nor bean name match!
Am I missing something? I have spring-data-jpa in the pom, so the default transactionManager should be configured by Spring Boot, but it can't be found. Why? Unfortunately I didn't find an answer on how to do something like this on Stack Overflow.
I am presuming that you are not using two-phase commit (XA) transactions. Essentially, in order to chain transactions across multiple transactional resources, both your JMS ConnectionFactory and your DB DataSource have to be XA resource implementations, and you have to use a proper JTA TransactionManager. While it is not a particularly hard thing to do, JTA is usually skipped by the majority of Java programmers, because in the typical historical setup (Spring code deployed in a Java EE server) JTA "just works" in the background and is never accessed directly. In a standalone Boot application you have to explicitly enable this functionality by providing a proper JTA transaction manager and using XA implementations of your resources.
See: https://docs.spring.io/spring-boot/docs/2.0.x/reference/html/boot-features-jta.html
In short, a JmsTransactionManager and a DB transaction manager won't do; you need an instance of JtaTransactionManager.
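For completeness, a rough sketch of what the calling code could look like once a JTA platform is in place, for example via the spring-boot-starter-jta-atomikos or spring-boot-starter-jta-narayana dependency together with XA-capable ConnectionFactory and DataSource implementations (the queue and table names here are made up):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final JmsTemplate jmsTemplate;
    private final JdbcTemplate jdbcTemplate;

    public OrderService(JmsTemplate jmsTemplate, JdbcTemplate jdbcTemplate) {
        this.jmsTemplate = jmsTemplate;
        this.jdbcTemplate = jdbcTemplate;
    }

    @Transactional // resolved against the auto-configured JtaTransactionManager
    public void process(String orderId) {
        // both operations are enlisted in the same global (XA) transaction
        jmsTemplate.convertAndSend("orders", orderId);                     // queue name is made up
        jdbcTemplate.update("insert into orders(id) values (?)", orderId); // table is made up
    }
}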
I am working with Spring Batch and JPA and I ran into the TransactionManager bean conflict. I found a solution by setting the transaction manager to a JpaTransactionManager on a step. But according to this link (https://github.com/spring-projects/spring-batch/issues/961), that is not correct, even though it works for me.
@Autowired
private JpaTransactionManager transactionManager;

private Step buildTaskletStep() {
    return stepBuilderFactory.get("SendCampaignStep")
            .<UserAccount, UserAccount>chunk(pushServiceConfiguration.getCampaignBatchSize())
            .reader(userAccountItemReader)
            .processor(userAccountItemProcessor)
            .writer(userAccountItemWriter)
            .transactionManager(transactionManager)
            .build();
}
I tried the suggested solution of implementing a BatchConfigurer, but it conflicts with how I disable the metadata tables using this code:
@Configuration
@EnableAutoConfiguration
@EnableBatchProcessing
public class BatchConfiguration extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // override to not set a datasource even if one exists;
        // initialize() will then use a map-based JobRepository (instead of the database)
    }
}
What would be the problem with the first solution of setting the transaction manager on a step?
In Spring Batch, there are two places where a transaction manager is used:
In the proxy created around the JobRepository to create transactional methods when interacting with the job repository
In each step definition to drive the step's transaction
Typically, the same transaction manager is used in both places, but this is not a requirement. It is perfectly fine to use a ResourcelessTransactionManager with the job repository to not store any meta-data and a JpaTransactionManager in the step to persist data in a database.
By default, when you use #EnableBatchProcessing and you provide a DataSource bean, Spring Batch will create a DataSourceTransactionManager and set it in both places, because this is the most typical case. But nothing prevents you from using a different transaction manager for the step. In this case, you should accept that business data and technical meta-data can get out of sync when things go wrong.
That's why the expected way to provide a custom transaction manager is via a custom BatchConfigurer#getTransactionManager, in which case your custom transaction manager is set in both places. This was not clearly documented, but it has been fixed since v4.1. Here is the section that mentions it: Configuring a JobRepository. This is also mentioned in the Javadoc of @EnableBatchProcessing:
In order to use a custom transaction manager, a custom BatchConfigurer should be provided.
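For reference, a sketch of that BatchConfigurer-based approach, assuming a JPA setup with an EntityManagerFactory bean; how this interacts with the map-based job repository from the question depends on the Spring Batch version, so treat it as a starting point rather than a drop-in:

import javax.persistence.EntityManagerFactory;

import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableBatchProcessing
public class CustomBatchConfiguration extends DefaultBatchConfigurer {

    private final JpaTransactionManager transactionManager;

    public CustomBatchConfiguration(EntityManagerFactory entityManagerFactory) {
        this.transactionManager = new JpaTransactionManager(entityManagerFactory);
    }

    @Override
    public PlatformTransactionManager getTransactionManager() {
        // hands the JPA transaction manager to both the JobRepository proxy and the steps
        return transactionManager;
    }
}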
I use Spring Boot JPA in a standalone GUI (Swing) Java application with an embedded H2 database.
I use Spring Boot 1.3.0 and this is my additional configuration:
private static final String dataSourceUrl = "jdbc:h2:./databse;DB_CLOSE_ON_EXIT=FALSE";

@Bean
public DataSource dataSource() {
    return DataSourceBuilder.create().url(dataSourceUrl).username("user").password("pwd").build();
}

@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
    em.setDataSource(dataSource);
    em.setPackagesToScan(new String[] { "packages.to.scan" });

    JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
    em.setJpaVendorAdapter(vendorAdapter);

    Properties properties = new Properties();
    properties.setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
    properties.setProperty("hibernate.hbm2ddl.auto", "update");
    em.setJpaProperties(properties);

    return em;
}
In my application.properties file I have only one line: spring.aop.proxy-target-class=true.
For my repositories I extend JpaRepository.
Everything is working; the only problem I had recently: a Mac running the application had some kind of problem and crashed. Afterwards, none of the modifications made before the crash were actually stored in the database. I use the @Transactional annotation to modify data in the database.
I'm not very experienced with databases, but after googling around I guess the transactions are cached by the persistence context (not sure if the terminology is correct) and are only actually persisted when the application is closed. I checked the database file and made some manipulations through the GUI (including some queries), but the modification date of the database file changed only when I closed the application.
As this is a standalone GUI application, there would be no performance issue if every transaction were persisted to the database immediately. Am I on the right track, and how could I make sure that every transaction is persisted to the database right away? Is there any configuration I have to do, or do I have to add code after every call to a repository's save() method?
If not, I have absolutely no idea how to debug this kind of problem, as I have to admit I'm not quite sure what's actually going on under the hood.
Hibernate decides on its own when to write to the database (flushing the persistence context), based on optimization parameters and the configured flushing strategy.
Maybe you can take a look here and adjust the behavior according to your needs:
https://docs.jboss.org/hibernate/orm/4.0/devguide/en-US/html/ch03.html
Information about the flush modes will also help you:
http://docs.jboss.org/hibernate/orm/4.3/javadocs/org/hibernate/FlushMode.html
Spring's @Transactional follows the container-managed transaction paradigm. By default, if one @Transactional method invokes a @Transactional method in another component/service/repository, the transaction is propagated. When the outermost @Transactional method completes, the transaction is committed to the database.
JPA may flush data to the database multiple times within the same transaction, but everything in the transaction is either committed or rolled back when the transaction completes. If you have @Transactional on a @Controller, the transaction completes after the DispatcherServlet has called the handler method (more specifically, it happens inside the CGLIB or JDK proxy created by Spring AOP).
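If the goal is to get pending changes written out as early as possible, one option is to flush explicitly; here is a small sketch (UserRepository and User are made-up names, and the repository API shown is the Spring Data JPA 1.x one that matches Boot 1.3). Note that the data only becomes durable when the surrounding transaction commits, i.e. when the outermost @Transactional method returns:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    private final UserRepository userRepository; // hypothetical JpaRepository<User, Long>

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @Transactional
    public void rename(Long id, String newName) {
        User user = userRepository.findOne(id); // Spring Data JPA 1.x API
        user.setName(newName);
        userRepository.saveAndFlush(user);      // flushes the pending SQL immediately;
                                                // durability still requires the commit when this method returns
    }
}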
We are using Spring with Hibernate to establish transactions with JTA. The PlatformTransactionManager is a JtaTransactionManager wired with the TransactionManager and UserTransaction from Narayana.
@Bean
@Scope("prototype")
public TransactionManager jbossTransactionManager() {
    return jtaPropertyManager.getJTAEnvironmentBean().getTransactionManager();
}

@Bean
@Scope("prototype")
public UserTransaction jbossUserTransaction() {
    return jtaPropertyManager.getJTAEnvironmentBean().getUserTransaction();
}

@Bean
public PlatformTransactionManager transactionManager() {
    return new JtaTransactionManager(jbossUserTransaction(), jbossTransactionManager());
}
I have noted that the JtaTransactionManager has the UT and TM I would expect. On JBoss EAP 6, I noted that my DataSource is wrapped as a WrapperDataSource and that this is tied to a different TM; specifically, it is using the TransactionManagerDelegate. This appears to be the transaction manager provided by JBoss under the JNDI names java:TransactionManager and java:jboss/TransactionManager. This prevents my transactions from having proper transactional boundaries, and I leak data on flush. If I remove my configuration and take the UT and TM from the container instead, my transactions behave properly.
What is deciding to use this other TransactionManager? It appears to be the container's JCA, but I do not understand the mechanism behind this decision.
Should I remove my UT and TM, surrender control to the container to provide these components to my app, and rely on the JTA platform as is, or should I try to gain more control?
The container provides the datasource with a transaction manager from the JCA. This TransactionManager is a different instance than the one we had wired in from Spring (our bean had been instantiated from the Arjuna environment bean). Using Spring's JtaTransactionManager to obtain the transaction manager from the container, via JNDI in the default locations, ensured that we have the same transaction manager as the JTA platform used by Hibernate (the JBoss application server in this case).
Before we made this change, the application's TransactionManager was in a transaction with Hibernate, but the transaction manager on the datasource was not participating, which caused the "leak".
Using the same instance gets everything working together. This has also been proven out on WebLogic using the same approach.
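A sketch of the simplified configuration, letting Spring's JtaTransactionManager locate the container-provided UserTransaction and TransactionManager through the standard JNDI locations instead of wiring the Arjuna beans directly:

@Bean
public PlatformTransactionManager transactionManager() {
    // With no UserTransaction/TransactionManager passed in, Spring's JtaTransactionManager
    // looks them up in JNDI (java:comp/UserTransaction plus well-known fallback locations),
    // so it picks up the same instances the container hands to the JCA datasource.
    return new JtaTransactionManager();
}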
Spring supports programmatic transactions, which give us fine-grained control over TX management. According to the Spring documentation, one can use programmatic TX management by:
1. utilizing Spring's TransactionTemplate:
transactionTemplate.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        try {
            updateOperation1();
            updateOperation2();
        } catch (SomeBusinessException ex) {
            status.setRollbackOnly();
        }
    }
});
2. leveraging the PlatformTransactionManager directly (inject a PlatformTransactionManager implementation into the DAO):
DefaultTransactionDefinition def = new DefaultTransactionDefinition();
def.setName("SomeTxName");
def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

// txManager is a reference to a PlatformTransactionManager
TransactionStatus status = txManager.getTransaction(def);
try {
    updateOperation1();
    updateOperation2();
} catch (MyException ex) {
    txManager.rollback(status);
    throw ex;
}
txManager.commit(status);
For the sake of simplicity, let's say we are dealing with JDBC database operations.
I am wondering, for the database operations performed in updateOperation1() and updateOperation2() in the second snippet: unless they are implemented with JdbcTemplate or JdbcDaoSupport, the operations are not actually performed within any transaction, are they?
My analysis is that if we don't use JdbcTemplate or JdbcDaoSupport, we will inevitably create/retrieve a connection from the DataSource ourselves, and the connection we get is of course not the connection the PlatformTransactionManager uses underneath to manage the transaction.
I dug into the Spring source code and skimmed the related classes, and found that the PlatformTransactionManager retrieves a connection from a ConnectionHolder, which in turn is obtained from the TransactionSynchronizationManager. I also found that JdbcTemplate and JdbcDaoSupport try to get a connection through a similar routine from the TransactionSynchronizationManager.
Because the TransactionSynchronizationManager manages many resources, including connections, per thread (basically using a ThreadLocal to ensure each thread gets its own instance of the managed resource),
I think the connection retrieved by the PlatformTransactionManager and by JdbcTemplate or JdbcDaoSupport is the same, which explains how Spring's programmatic transactions ensure that updateOperation1() and updateOperation2() are guarded by the transaction.
Is my analysis correct? If it is, why doesn't the Spring documentation emphasize this caveat?
Yes, it's correct.
Any code that uses raw Connections should obtain them from the DataSource in a special way in order to participate in transactions managed by Spring (12.3.8 DataSourceTransactionManager):
Application code is required to retrieve the JDBC connection through DataSourceUtils.getConnection(DataSource) instead of Java EE's standard DataSource.getConnection.
Another option (if you cannot change code that calls getConnection()) is to wrap your DataSource with TransactionAwareDataSourceProxy.
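For illustration, a small sketch of the first option with plain JDBC (the table and column names are made up):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class AccountJdbcDao {

    private final DataSource dataSource;

    public AccountJdbcDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void debit(long accountId, BigDecimal amount) throws SQLException {
        // participates in a Spring-managed transaction if one is active on this thread
        Connection con = DataSourceUtils.getConnection(dataSource);
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE account SET balance = balance - ? WHERE id = ?")) {
            ps.setBigDecimal(1, amount);
            ps.setLong(2, accountId);
            ps.executeUpdate();
        } finally {
            // no-op for a transactional connection; closes it otherwise
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}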