I have some functional units of transactional work within my service(s) that involve multiple calls to different DAOs (using JdbcTemplate or NamedParameterJdbcTemplate). Some calls return sequence-generated IDs needed by the next select/insert, and others are plain void insertions that must run in order. Each DAO call represents one SQL statement and therefore one round trip to the database. For example:
@Transactional
public void create() {
    playerService.findByUserName(player.getPlayerId());
    roleService.insertRoleForPlayer(player.getId(), ROLE_MANAGER);
    leagueScoringService.createLeagueScoringForLeague(leagueScoring);
    leagueService.insertPlayerLeague(player, code);

    Standing playerStanding = new Standing();
    playerStanding.setPlayerId(player.getId());
    playerStanding.setPlayerUserName(player.getPlayerId());
    playerStanding.setLeagueId(leagueId);
    standingService.insertStandings(ImmutableList.of(playerStanding));
}
Preferably I want to batch up the calls or at least ensure they all execute together against the database. Note that I do not use Hibernate and I don't want to. What is the best approach for this?
Related
I've seen articles saying that we should try to limit the scope of a transaction, e.g. instead of doing this:
@Transactional
public void save(User user) {
    queryData();
    addData();
    updateData();
}
We should exclude queryData from the transaction by using Spring's TransactionTemplate (or just move it out of the transactional method):
@Autowired
private TransactionTemplate transactionTemplate;

public void save(final User user) {
    queryData();
    transactionTemplate.execute(status -> {
        addData();
        updateData();
        return Boolean.TRUE;
    });
}
But my understanding is that since JDBC always needs a transaction for every operation, with the second approach there will be two transactions opened and closed: one for queryData (opened by JDBC) and another, opened by our code, for the statements inside transactionTemplate.execute. If so, isn't this a waste of resources, now that one transaction has been split into two?
Once a transaction starts, it occupies one DB connection. So we generally want the transaction to complete as fast as possible, and to delay starting it until we really need to access the DB, so that the connection pool has more available connections for other requests to use.
So if part of the workflow within your function takes some time to finish its work and that work does not require DB access, it is true that it is better to limit the scope of the transaction to exclude that part of the code.
But in your example, since both transactions are executed in series and both need to access the DB, I don't see any point in separating them into two different transactions.
Also, in terms of Hibernate, it is very normal to load and update entities in the same transaction, so that you do not have to deal with detached entities, which is what happens when the entities you update were loaded in another, already-closed transaction. Dealing with detached entities is not easy if you are not familiar with Hibernate.
Sample Scenario
I have a limit that controls the total value of a column. If a save would exceed this limit, I want it to throw an exception. For example, suppose LIMIT = 20 and I have already added the following data:
id | code | value
---+------+------
 1 | A    | 15
 2 | A    | 5
 3 | B    | 12
 4 | B    | 3
If I insert (A, 2), it exceeds the limit (15 + 5 + 2 = 22 > 20) and I want to get an exception.
If I insert (B, 4), the transaction should succeed, since the limit is not exceeded (12 + 3 + 4 = 19).
code and value are interrelated: the limit applies per code.
What can I do?
I can check this scenario with explicit queries: write a method that sums the current values and call it from the save method. That works.
However, I'm looking for a more convenient solution than this.
For example, is there any annotation I can use when designing the entity?
Can I do this without having to call the checking method every time?
To give an example of the kind of thing I mean: @UniqueConstraint checks whether duplicate values are being added.
Using a transaction
The most common and long-accepted way is simply to abstract, in a suitable form (a class, a library, a service, ...), the business rules that govern the behavior you describe, and run them within a transaction:
@Transactional(propagation = Propagation.REQUIRED)
public RetType operation(ReqType args) {
    // ...
    // perform operations
    // ...
    if (/* post conditions fail */)
        throw /* ... */;
    // ...
}
In this case, if there is already an open transaction when the method is called, that transaction will be used (and there will be no interlocks); if there is none, a new one will be created, so that both the operations and the postcondition check are performed within the same transaction.
Note that with this strategy both the operations and the invariant check can combine multiple transactional resources managed by the TransactionManager (e.g. Redis, MySQL, MQs, ... simultaneously and in a coordinated manner).
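Applied to the limit scenario above, a minimal sketch with JdbcTemplate (the table name, the injected jdbcTemplate, and LimitExceededException are illustrative assumptions, not part of the original answer):

@Transactional(propagation = Propagation.REQUIRED)
public void addValue(String code, int value) {
    // perform the operation first ...
    jdbcTemplate.update("INSERT INTO entity (code, value) VALUES (?, ?)", code, value);

    // ... then check the postcondition inside the same transaction;
    // throwing a runtime exception makes Spring roll the insert back
    Integer sum = jdbcTemplate.queryForObject(
            "SELECT SUM(value) FROM entity WHERE code = ?", Integer.class, code);
    if (sum != null && sum > 20) {
        throw new LimitExceededException("limit exceeded for code " + code);
    }
}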
Using only the database
It has not been used for a long time (in favor of the first way), but using TRIGGERS was the canonical option some decades ago for checking postconditions. This solution is usually coupled to the specific database engine (e.g. PostgreSQL or MySQL).
It can be useful when the client making the modifications is unable, or cannot be trusted, to check the postconditions within a transaction (e.g. bash processes), but nowadays that is infrequent.
The use of TRIGGERS may also be preferable in certain scenarios where efficiency is required, since there are certain optimization options available within database scripts.
Neither Hibernate nor Spring Data JPA has anything built-in for this scenario. You have to program the transaction logic in your repository yourself:
@PersistenceContext
EntityManager em;

public void addValue(String code, int value) {
    // SUM() returns a Long in JPQL, or null when no rows match yet
    var checkQuery = em.createQuery(
            "SELECT SUM(e.value) FROM Entity e WHERE e.code = :code", Long.class);
    checkQuery.setParameter("code", code);
    Long currentSum = checkQuery.getSingleResult();
    if ((currentSum == null ? 0 : currentSum) + value > 20) {
        throw new LimitExceededException("attempted to exceed limit for " + code);
    }
    var newEntity = new Entity();
    newEntity.setCode(code);
    newEntity.setValue(value);
    em.persist(newEntity);
}
Then (this is important!) you have to set the SERIALIZABLE isolation level on the @Transactional annotations of the methods that work with this table.
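For example (a sketch; the surrounding service class is illustrative):

@Service
public class ValueService {

    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void addValue(String code, int value) {
        // check-then-insert as above; under SERIALIZABLE, two concurrent
        // check-then-insert attempts conflict instead of both committing
        // past the limit
    }
}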
Read more about the serializable isolation level here; they have an oddly similar example.
Note that you also have to consider retrying the failed transaction. I have no idea how to do this with Spring, though.
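One option might be the separate Spring Retry library (my assumption, not part of the original answer; it needs the spring-retry dependency, @EnableRetry on a configuration class, and some care about the ordering of the retry and transaction aspects):

// re-invokes the method if the database aborts it with a serialization failure
@Retryable(value = CannotSerializeTransactionException.class, maxAttempts = 3)
@Transactional(isolation = Isolation.SERIALIZABLE)
public void addValue(String code, int value) {
    // same check-then-insert logic as above
}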
You should use a singleton (javax.ejb.Singleton):
@Singleton
public class Register {

    @Lock(LockType.WRITE)
    public void register(String code, int value) {
        if (canInsertOrModify(code, value)) {
            // use entityManager or some DAO
        } else {
            // do something
        }
    }
}
We have a method that reads from and writes to MySQL, and the method can be called by multiple threads. The DB operations look like this:
public List<Record> getAndUpdate() {
    Task task = taskMapper.selectByPrimaryKey(id);
    if (task.getStatus() == 0) {
        insertRecords();
        task.setStatus(1);
        taskMapper.update(task);
    }
    // some queries and return data
    return someRecordMapper.selectByXXX();
}

private void insertRecords() {
    // read some files and create someRecords
    someRecordMapper.insertBatch(someRecords);
}
The method reads a task's status; if the status is 0, it inserts a bunch of records (for that task) into the Records table and then sets the task's status to 1.
I want those DB operations to be transactional and exclusive, meaning that when one thread enters the transaction, other threads trying to read the same task should block. Otherwise they will all see the task status as 0, insertRecords() will be called multiple times, and the data will be duplicated.
The @Transactional annotation doesn't seem to block transactions from other threads; it only ensures a rollback in case of failure. So I think the above issue cannot be avoided with @Transactional alone.
I'm using MySQL with MyBatis. I think MySQL itself can achieve this kind of synchronization between threads, so I'd rather not introduce extra components such as a Redis lock. How can I do this in Spring?
I ended up using a "SELECT ... FOR UPDATE" query. While this query's transaction is open, any other transaction that tries to read the same row with a locking read, or to write it, blocks until the current transaction commits or rolls back. The method also needs to be annotated with @Transactional, but note that the row lock and the transaction are two different concerns. The test results are satisfactory.
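A minimal sketch of how this can look with a MyBatis annotation mapper (selectByIdForUpdate is a name I'm assuming; the rest follows the question's code):

@Mapper
public interface TaskMapper {

    // FOR UPDATE locks the selected task row until the surrounding
    // transaction commits or rolls back
    @Select("SELECT * FROM task WHERE id = #{id} FOR UPDATE")
    Task selectByIdForUpdate(@Param("id") long id);
}

@Transactional
public List<Record> getAndUpdate() {
    Task task = taskMapper.selectByIdForUpdate(id); // other threads block here
    if (task.getStatus() == 0) {
        insertRecords();
        task.setStatus(1);
        taskMapper.update(task);
    }
    return someRecordMapper.selectByXXX();
}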
I have to write some methods that change values in the database and perform some operations on the file system.
So I have to perform this sequence of steps:
1. Set the boolean Updating field to true in the database. It is used to prevent access to the file system and to database information linked to this value (for example a fleet of cars).
2. Make some changes to the database, for example to the date, name, value or other fields. These changes affect several database tables.
3. Make changes to the file system and the database.
4. Set the boolean Updating back to false.
As you can imagine, I have to handle errors and start a rollback procedure to restore the database and the file system.
I have some doubts about how to write my method. I have:
The entity.
The repository interface, which extends JpaRepository and has query creation from method names as well as @Query methods, annotated with @Transactional when they write to the database (otherwise I received an error).
The service interface.
The service implementation, which contains all the methods that make simple changes to the database. This class is annotated with @Transactional.
From other classes I call the service methods to use the database, but if I call several of these methods in a row, each one writes its value to the database individually, so it isn't possible to roll them all back together, or am I wrong?
Step 1 has to be written to the database immediately, while the other changes should use the @Transactional properties; but is just adding @Transactional to my method enough? For file system rollback I create a backup of all subfolders and restore them in case of error.
For example:
@Transactional(rollbackFor = FileSystemException.class)
public void changeDisplacement(int idApplication, int idDisplacement) {
    // this has to be written immediately to the database so that
    // other methods can stop using this application
    applicationServices.setUpdating(true);

    Application application = applicationServices.getId(idApplication);
    application.setDisplacement(displacementServices.getId(idDisplacement));

    // other operations on different tables

    // operations on the file system, catching all exceptions with try-catch;
    // in the catch, restore the file system and throw FileSystemException
    // to start the database rollback

    // in the finally clause, call applicationServices.setUpdating(false)
}
Can it work with this logic, or is @Transactional wrong here?
Thanks
@Transactional is OK here. The only thing is that you need to set the propagation of applicationServices.setUpdating to REQUIRES_NEW so that it gets committed individually:
public class ApplicationServices {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void setUpdating(boolean b) {
        // update DB here
    }
}
In the case of an exception, it will still update the DB, as long as the call to setUpdating is in the finally block.
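Put together, the calling side can look like this (a sketch; setUpdating must be called on another Spring bean, through its proxy, for REQUIRES_NEW to apply):

@Transactional(rollbackFor = FileSystemException.class)
public void changeDisplacement(int idApplication, int idDisplacement) {
    applicationServices.setUpdating(true); // commits immediately (REQUIRES_NEW)
    try {
        // DB changes here join the outer transaction; a FileSystemException
        // thrown from the file-system work rolls them back
    } finally {
        applicationServices.setUpdating(false); // commits even after a rollback
    }
}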
There are multiple questions here and some of them are hard to grasp, but here is a bit of input. When you have this:
@Transactional(rollbackFor = FileSystemException.class)
public void changeDisplacement(int idApplication, int idDisplacement) {
    applicationServices.setUpdating(true);
that flag will hit the database only when the @Transactional method finishes; the change stays in the Hibernate context until the end of the @Transactional method.
So while you are executing changeDisplacement and someone else comes and reads that flag, they will see false (because you have not written it to the DB yet). They could see it via READ_UNCOMMITTED, but whether you allow that is up to your application.
You could have a method with REQUIRES_NEW that sets the flag to true, and in case of a revert, update the flag back.
Generally, keeping the DB and the file system in sync is not easy. The way I have done it before (there might be better options) is to register events once the DB change is correct, and then write to the file system.
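One way to "register events" in Spring is transaction synchronization (a sketch, assuming Spring 5+; writeFiles() is a placeholder for the file-system work):

// inside the @Transactional method, after the DB changes have been made
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCommit() {
        // runs only if the DB transaction commits successfully
        writeFiles();
    }
});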
We are using Spring and iBatis, and I have discovered something interesting in the way a service method annotated with @Transactional handles multiple DAO calls that return the same record. Here is an example of a method that does not work:
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());

    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
The problem with the above method is that the second execution of the line
individualDAO.selectByPrimaryKey(trans.getPartyId())
returns the exact same object that was returned by the first call.
This means that oldIndvRecord and individual are the same object, and the line
individualHistoryDAO.insert(oldIndvRecord);
adds a row to the history table that contains the changes (which we do not want).
In order for it to work, it must look like this:
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(individual);

    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    individualDAO.updateByPrimaryKey(individual);
}
We wanted to write a service method, updateIndividual, that we could use for all updates of this table and that would store a row in the IndividualHistory table before performing the update:
@Transactional
public void updateIndividual(Individual individual) {
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(individual.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
But it does not store the row as it was before the object changed. Even if we explicitly assign the two DAO results to different variables before the calls, the second variable ends up referencing the same object as the first.
I have looked through the Spring documentation and cannot determine why this is happening.
Can anyone explain this?
Is there a setting that can make the second DAO call return the database contents instead of the previously returned object?
This is standard session-cache behavior for ORMs; it is perfectly described in the Hibernate documentation, in the Transaction chapter:
Through Session, which is also a transaction-scoped cache, Hibernate provides repeatable reads for lookup by identifier and entity queries and not reporting queries that return scalar values.
The same goes for iBatis/MyBatis:
MyBatis uses two caches: a local cache and a second level cache. Each time a new session is created, MyBatis creates a local cache and attaches it to the session. Any query executed within the session will be stored in the local cache, so further executions of the same query with the same input parameters will not hit the database. The local cache is cleared upon update, commit, rollback and close.