I'm implementing a JDBC database access API (basically a wrapper) and I'm using Spring JdbcTemplate with PlatformTransactionManager to handle transactional operations. Everything looks OK, but I cannot understand how JdbcTemplate manages concurrent transactions.
I'll give you a simplified example based on the creation of students to make my point. Let's create two students, John and Jack: the first without errors and the second with one error. Here are the steps, and the code below.
1. John starts a transaction
2. Execute John's insert without committing
3. Wait for Jack's insert
4. Jack starts a transaction
5. Execute Jack's insert with an error (age is null but the database requires NOT NULL)
6. Roll back Jack's transaction
7. Commit John's transaction
StudentDAO
public class StudentJDBCTemplate implements StudentDAO {

    private DataSource dataSource;
    private JdbcTemplate jdbcTemplateObject;
    private PlatformTransactionManager transactionManager;

    // constructor, getters and setters

    public TransactionStatus startTransaction() throws TransactionException {
        TransactionDefinition def = new DefaultTransactionDefinition();
        return transactionManager.getTransaction(def);
    }

    public void commitTransaction(TransactionStatus status) throws TransactionException {
        transactionManager.commit(status);
    }

    public void rollbackTransaction(TransactionStatus status) throws TransactionException {
        transactionManager.rollback(status);
    }

    public void create(String name, Integer age) {
        String sql = "insert into Student (name, age) values (?, ?)";
        jdbcTemplateObject.update(sql, name, age);
    }
}
MainApp
public class MainApp {

    public static void main(String[] args) {
        // setup db connection etc.
        StudentJDBCTemplate studentDao = new StudentJDBCTemplate();

        TransactionStatus txJohn = studentDao.startTransaction();
        TransactionStatus txJack = studentDao.startTransaction();

        studentDao.create("John", 20);
        try {
            studentDao.create("Jack", null); // **FORCE EXCEPTION**
        } catch (Exception e) {
            studentDao.rollbackTransaction(txJack);
        }
        studentDao.commitTransaction(txJohn);
    }
}
How does JdbcTemplate know that one transaction is OK but the other is not? From my understanding, although we have created two transactions, JdbcTemplate will roll back Jack's AND John's transactions, because the query, execute and update methods do not take a TransactionStatus as a parameter. Does that mean Spring's JdbcTemplate only supports one transaction at a time?!
All the operations in a single transaction are executed as a single unit: either all of them are committed or none are.
If John starts a transaction that does an insert and then an update, either both statements succeed or neither does, and they are not affected by the transaction Jack started.
How concurrent transactions interfere with each other is controlled by the isolation level, i.e. how a transaction sees data modified by another concurrent transaction.
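Each transaction Spring starts is bound to the thread that began it: DataSourceTransactionManager registers the JDBC Connection with TransactionSynchronizationManager, and JdbcTemplate looks it up from there, which is why update() needs no TransactionStatus parameter. Two independent transactions therefore need two threads. A minimal sketch (class name and wiring are illustrative, assuming one TransactionTemplate call per thread):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.DefaultTransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class ConcurrentStudentInserts {

    private final JdbcTemplate jdbcTemplate;
    private final TransactionTemplate txTemplate;

    public ConcurrentStudentInserts(JdbcTemplate jdbcTemplate,
                                    PlatformTransactionManager transactionManager) {
        this.jdbcTemplate = jdbcTemplate;
        // The definition can also carry the isolation level mentioned above.
        DefaultTransactionDefinition def = new DefaultTransactionDefinition();
        def.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);
        this.txTemplate = new TransactionTemplate(transactionManager, def);
    }

    public void run() throws InterruptedException {
        // John's transaction lives on this thread and commits independently.
        Thread john = new Thread(() -> txTemplate.execute(status ->
                jdbcTemplate.update("insert into Student (name, age) values (?, ?)",
                        "John", 20)));

        // Jack's transaction lives on another thread; the NOT NULL violation
        // makes TransactionTemplate roll back Jack's work only.
        Thread jack = new Thread(() -> {
            try {
                txTemplate.execute(status ->
                        jdbcTemplate.update("insert into Student (name, age) values (?, ?)",
                                "Jack", null));
            } catch (Exception expected) {
                // Jack's insert is already rolled back here; John is untouched.
            }
        });

        john.start();
        jack.start();
        john.join();
        jack.join();
    }
}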
Related
I am facing an issue with the transactions configured in my code.
Below is the transactional code that writes data to the DB.
Writer.java
class Writer {

    @Inject
    private SomeDAO someDAO;

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void write() {
        this.batchWrite();
    }

    private void batchWrite() {
        try {
            someDAO.writeToTable1(true);
        } catch (Exception ex) {
            someDAO.writeToTable1(false);
        }
        someDAO.writeToTable2();
    }
}
SomeDAO.java
class SomeDAO {

    @Inject
    private JdbcTemplate jdbcTemplate;

    public void writeToTable1(boolean flag) {
        // Writes data to table 1 using jdbcTemplate
        jdbcTemplate.update();
    }

    public void writeToTable2() {
        // Writes data to table 2 using jdbcTemplate
        jdbcTemplate.update();
    }
}
Here the data is stored into table 1 properly, but sometimes table 2 is skipped.
I am not sure how this is happening, as both tables are written within the same transaction.
Either the transaction is partially committing the data or partially rolling back.
I suspect that the JdbcTemplate object I am injecting into the SomeDAO class is creating a new connection instead of using the transaction's existing connection.
Can anyone please help me here?
Try binding a transaction manager bean built on your JdbcTemplate's DataSource, and reference it from @Transactional:

// Create a bean
@Bean
public PlatformTransactionManager txnManager() throws Exception {
    return new DataSourceTransactionManager(jdbcTemplate().getDataSource());
}

And then use this transaction manager in @Transactional("txnManager").
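With that bean in place, the annotation from the question would reference it by name; a sketch (only the annotation changes, the rest of the Writer class stays as above):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

class Writer {

    // The value names the PlatformTransactionManager bean to use, so the
    // JdbcTemplate calls in SomeDAO join this transaction's connection.
    @Transactional(value = "txnManager", propagation = Propagation.REQUIRES_NEW)
    public void write() {
        this.batchWrite();
    }

    // batchWrite() and the DAO calls stay exactly as in the question.
}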
I have a Spring Boot application that connects to two databases simultaneously. I read data from the first database, process it, and then write other data to a second database.
However, in my service I have a problem with rollback in case of an exception during processing: only the first database is rolled back, not the second.
This is the code in my service:
@Service
public class Service {

    @Transactional(transactionManager = "firstDatabase", rollbackFor = Exception.class)
    public void methodA() throws Exception {
        InObjectEntity object = repositoryIN.findObject(); // TABLE_1
        object.setProcessed("Y");
        repositoryIN.save(object);
        methodB(object);
    }

    @Transactional(transactionManager = "secondDatabase", rollbackFor = Exception.class)
    public void methodB(InObjectEntity in) throws Exception {
        OutObjectEntity out = new OutObjectEntity(); // TABLE_2
        out.setValue(in.getValue());
        methodC();
        repositoryOUT.save(out);
        // exception raised here!!!!!
        throw new Exception();
    }

    private void methodC() {
        AnotherObjectEntity another = new AnotherObjectEntity(); // TABLE_3
        another.setValue("new value");
        repositoryOUT.save(another);
    }
}
When the exception is raised, the records in TABLE_2 and TABLE_3 are stored/changed anyway, but the record in TABLE_1 is not stored/changed (the rollback executes).
Did I do something wrong?
Does @Transactional support NamedParameterJdbcTemplate.batchUpdate?
If something goes wrong during the batch execution, will it roll back as expected? Personally, I have not tried that; that's why I am asking.
Is there any document on which methods @Transactional supports?
public class JdbcActorDao implements ActorDao {

    private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
    }

    @Transactional
    public int[] batchUpdate(List<Actor> actors) {
        return this.namedParameterJdbcTemplate.batchUpdate(
                "update t_actor set first_name = :firstName, last_name = :lastName where id = :id",
                SqlParameterSourceUtils.createBatch(actors));
    }

    // ... additional methods
}
NamedParameterJdbcTemplate is just an abstraction around JDBC. In Spring it is the transaction manager that is responsible for managing transactions; not that you cannot do it via plain JDBC, but this is the Spring way. Spring internally uses AOP to inspect the annotated methods and delegates their transaction management, but that role is separate from the NamedParameterJdbcTemplate.
So you can use it freely and annotate your methods with @Transactional, as long as they are in a Spring-managed component/bean.
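For completeness, a minimal configuration sketch (Java config, illustrative names) of the wiring @Transactional relies on; the key point is that the transaction manager and the NamedParameterJdbcTemplate must share the same DataSource:

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement // enables the AOP proxying behind @Transactional
public class TxConfig {

    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        // Same DataSource as the one backing NamedParameterJdbcTemplate, so a
        // failing batchUpdate is rolled back by the surrounding transaction.
        return new DataSourceTransactionManager(dataSource);
    }
}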
I am trying to implement logging to a DB table using Spring AOP. By "logging in a table" I mean writing, to a special log table, information about the record that was CREATED/UPDATED/DELETED in a regular table for a domain object.
I wrote part of the code and everything works well except one thing: when the transaction is rolled back, the changes in the log table are still committed successfully. That's strange to me, because my AOP advice uses the same transaction as my business and DAO layers. (From my AOP advice I call methods of a special manager class with transaction propagation MANDATORY, and I also checked the transaction name via TransactionSynchronizationManager.getCurrentTransactionName() in the business layer, DAO layer and AOP advice, and it is the same.)
Has anyone tried to implement something similar in practice? Is it possible to use the same transaction in an AOP advice as in the business layer, and to roll back changes made in the AOP advice if some error occurs in the business layer?
Thank you in advance for answers.
EDIT
I want to clarify that the problem with rollback occurs only for changes made from the AOP advice. All changes made in the DAO layer are rolled back successfully. I mean, for example, that if some exception is thrown, the changes made in the DAO layer are rolled back successfully, but the information in the log table is saved (committed). I can't understand why, because, as I wrote above, the AOP advice uses the same transaction.
EDIT 2
I checked with the debugger the piece of code where I write to the log table in the AOP advice, and it seems that JdbcTemplate's update method executes outside the transaction, because the changes were committed to the DB directly after execution of the statement and before the transactional method finished.
EDIT 3
I solved this problem. Actually, it was my own fault. I'm using MySQL. After creating the log table I didn't change the DB engine, and HeidiSQL set MyISAM by default. But MyISAM doesn't support transactions, so I changed the DB engine to InnoDB (as for all the other tables) and now everything works perfectly.
Thank you all for the help, and sorry for the disturbance.
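For anyone who lands here with the same symptom, the engine switch boils down to one DDL statement per affected table; a sketch (the log table name here is hypothetical):

// MyISAM silently ignores COMMIT/ROLLBACK; InnoDB participates in them.
jdbcTemplate.execute("ALTER TABLE operation_log ENGINE = InnoDB");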
If someone is interested, here is a simplified example that illustrates my approach.
Consider DAO class that has save method:
@Repository(value="jdbcUserDAO")
@Transactional(propagation=Propagation.SUPPORTS, readOnly=true, rollbackFor=Exception.class)
public class JdbcUserDAO implements UserDAO {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Autowired
    private VacationDaysDAO vacationDaysDAO; // assumed DAO, used below

    @LoggedOperation(affectedRows = AffectedRows.ONE, loggedEntityClass = User.class, operationName = OperationName.CREATE)
    @Transactional(propagation=Propagation.REQUIRED, readOnly=false, rollbackFor=Exception.class)
    @Override
    public User save(final User user) {
        if (user == null || user.getRole() == null) {
            throw new IllegalArgumentException("Input User object or nested Role object should not be null");
        }

        KeyHolder keyHolder = new GeneratedKeyHolder();

        jdbcTemplate.update(new PreparedStatementCreator() {
            @Override
            public PreparedStatement createPreparedStatement(Connection connection)
                    throws SQLException {
                PreparedStatement ps = connection.prepareStatement(SQL_INSERT_USER, new String[]{"ID"});
                ps.setString(1, user.getUsername());
                ps.setString(2, user.getPassword());
                ps.setString(3, user.getFullName());
                ps.setLong(4, user.getRole().getId());
                ps.setString(5, user.geteMail());
                return ps;
            }
        }, keyHolder);

        user.setId((Long) keyHolder.getKey());

        VacationDays vacationDays = user.getVacationDays();
        vacationDays.setId(user.getId());

        // Create related vacation days record.
        vacationDaysDAO.save(vacationDays);

        user.setVacationDays(vacationDays);

        return user;
    }
}
Here is what the aspect looks like:
@Component
@Aspect
@Order(2)
public class DBLoggingAspect {

    @Autowired
    private DBLogManager dbLogManager;

    @Around(value = "execution(* com.crediteuropebank.vacationsmanager.server.dao..*.*(..)) " +
            "&& @annotation(loggedOperation)", argNames="loggedOperation")
    public Object doOperation(final ProceedingJoinPoint joinPoint,
                              final LoggedOperation loggedOperation) throws Throwable {
        Object[] arguments = joinPoint.getArgs();

        /*
         * This should be called before logging operation.
         */
        Object retVal = joinPoint.proceed();

        // Execute logging action
        dbLogManager.logOperation(arguments, loggedOperation);

        return retVal;
    }
}
And here is what my DB log manager class looks like:
#Component("dbLogManager")
public class DBLogManager {
#Autowired
private JdbcTemplate jdbcTemplate;
#InjectLogger
private Logger logger;
#Transactional(rollbackFor={Exception.class}, propagation=Propagation.MANDATORY, readOnly=false)
public void logOperation(final Object[] inputArguments, final LoggedOperation loggedOperation) {
try {
/*
* Prepare query and array of the arguments
*/
jdbcTemplate.update(insertQuery.toString(),
insertedValues);
} catch (Exception e) {
StringBuilder sb = new StringBuilder();
// Prepare log string
logger.error(sb.toString(), e);
}
}
It could be to do with the order of the advice: you would want your @Transactional-related advice to take effect around (or before and after) your logging-related advice. If you are using Spring AOP, you can probably control it using the order attribute of the advice; give your transaction-related advice the highest precedence, so that it executes last on the way out.
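With Java config, one way to express that ordering is a sketch like the following, assuming annotation-driven transaction management is in use (the numbers are illustrative; lower values have higher precedence and run outermost):

import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
// order = 1 gives the transaction advice higher precedence than the
// @Order(2) logging aspect above, so the log insert joins the same
// transaction and is rolled back together with the business data.
@EnableTransactionManagement(order = 1)
public class AopTxConfig {
}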
Nothing to do with AOP; set the datasource's auto-commit property to false, like:

<bean id="datasource" ...>
    <property name="autoCommit" value="false"/>
</bean>

if you are using XML configuration.
Is there a best-practice pattern for completely resetting a database to a freshly-paved schema with JPA before a unit test? I have been using a testing persistence unit with hbm2ddl.auto=create-drop and recreating EMFs before each test, but I wonder if there's a cleaner way to do it.
Unit tests should not talk to the database.
Assuming you're writing an integration test for your data access layer, you could use a tool like DBUnit, or you could create a static test helper that programmatically resets your database state by doing all of your deletes and inserts using JPA queries inside of a transaction.
Resetting the database is not a big problem if you use a fast Java database such as H2 or HSQLDB. Compared to using Oracle/MySQL (or whatever you use for production), this will speed up your tests, and it still ensures your code is tested just as with the 'real' production database.
For maximum performance, you can use H2 in-memory (that way you may not have to reset the database manually; it's reset automatically when the connection is closed), or you can use a regular persistent database. To reset the database after use in H2, run the (native) statement 'DROP ALL OBJECTS DELETE FILES'.
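A minimal sketch of that reset with plain JDBC (the in-memory URL and credentials are illustrative; DB_CLOSE_DELAY=-1 keeps the in-memory database alive between connections during a test run):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class H2ResetSketch {

    public static void reset() throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1", "sa", "");
             Statement stmt = conn.createStatement()) {
            // H2's native full-reset statement mentioned above.
            stmt.execute("DROP ALL OBJECTS DELETE FILES");
        }
    }
}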
DBUnit has much of what you need; I use Spring's testing framework to roll back transactions after each test, see http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/testing.html
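In the Spring 3.x style of the linked docs, that looks roughly like this (context location, class name and test body are illustrative):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
@Transactional // each test runs in a transaction that is rolled back afterwards
public class CandidateRepositoryTest {

    @Test
    public void insertIsRolledBackAfterThisTest() {
        // Any inserts performed here are rolled back when the test ends,
        // leaving the database in its original state for the next test.
    }
}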
Is there a best-practice pattern for completely resetting a database to a freshly-paved schema with JPA before a unit test?
Don't reset the whole DB schema before every unit test; instead, reset the "DB environment" (the part specific to the current unit test) at the END of each unit test.
We have an entity...
@Entity
public class Candidate {

    private String id;
    private String userName;
    private EntityLifeCycle lifeCycle;

    protected Candidate() {
    }

    public Candidate(String userName) {
        this.userName = userName;
    }

    @Id @GeneratedValue(generator="uuid", strategy=GenerationType.AUTO)
    @GenericGenerator(name="uuid", strategy="uuid", parameters = {})
    @Column(name="candidate_id", nullable=false, unique=true)
    public String getId() {
        return id;
    }

    protected void setId(String id) {
        this.id = id;
    }

    @Column(name="user_name", nullable=false, unique=true)
    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    @Embedded
    public EntityLifeCycle getLifeCycle() {
        if (lifeCycle == null) {
            lifeCycle = new EntityLifeCycleImpl();
        }
        return lifeCycle;
    }

    public void setLifeCycle(EntityLifeCycleImpl lifeCycle) {
        this.lifeCycle = lifeCycle;
    }

    @PrePersist
    public void prePersist() {
        lifeCycle.setCreatedDate(new Date());
    }
}
We set the createdDate for each Candidate instance in the prePersist() method. Here is a test case that asserts that createdDate is set properly:
public class EntityLifeCycleTest {

    @Test
    public void testLifeCycle() {
        EntityManager manager = entityManagerFactory.createEntityManager();
        Candidate bond = new Candidate("Miss. Bond");
        EntityTransaction tx = manager.getTransaction();
        tx.begin();
        manager.persist(bond);
        tx.commit();
        Assert.assertNotNull(bond.getLifeCycle().getCreatedDate());
        manager.close();
    }
}
This test case runs properly the first time. But run it a second time and it throws a ConstraintViolationException, because userName is a unique key.
Therefore, I think the right approach is to clean the "DB environment (which is specific to the current unit test)" at the end of each test case. Like this:
public class EntityLifeCycleTest extends JavaPersistenceTest {

    @Test
    public void testLifeCycle() {
        EntityManager manager = entityManagerFactory.createEntityManager();
        Candidate bond = new Candidate("Miss. Bond");
        EntityTransaction tx = manager.getTransaction();
        tx.begin();
        manager.persist(bond);
        tx.commit();
        Assert.assertNotNull(bond.getLifeCycle().getCreatedDate());

        /* delete Candidate bond, so next time we can run this test case successfully */
        tx = manager.getTransaction();
        tx.begin();
        manager.remove(bond);
        tx.commit();
        manager.close();
    }
}
I have been using a testing persistence unit with hbm2ddl.auto=create-drop and recreating EMFs before each test, but I wonder if there's a cleaner way to do it.
Recreating the EMF before each test is time consuming, IMO.
Drop and recreate the DB schema only if you have made changes to an @Entity-annotated class that affect the underlying DB (e.g. adding/removing columns and/or constraints). So first validate the schema: if the schema is valid, don't recreate it, and if it is invalid, recreate it. Like this:
public class JavaPersistenceTest {

    protected static EntityManagerFactory entityManagerFactory;

    @BeforeClass
    public static void setUp() throws Exception {
        if (entityManagerFactory == null) {
            Map<String, String> properties = new HashMap<String, String>(1);
            try {
                properties.put("hibernate.hbm2ddl.auto", "validate");
                entityManagerFactory = Persistence.createEntityManagerFactory("default", properties);
            } catch (PersistenceException e) {
                e.printStackTrace();
                properties.put("hibernate.hbm2ddl.auto", "create");
                entityManagerFactory = Persistence.createEntityManagerFactory("default", properties);
            }
        }
    }
}
Now, if you run all the test cases (that extend JavaPersistenceTest) in one go, the EMF will be created only once (or twice, if the schema was invalid).