Spring transaction with several operations and rollback - java

I have to write some methods that change values in the database and perform some operations on the file system, so I have to execute this sequence of steps:

1. Set the boolean Updating field to true in the database. It is used to prevent access to the file system and to the database information linked to this value (for example a fleet of cars).
2. Make some changes to the database, for example to the date, name, value or other fields. These changes affect several database tables.
3. Make changes to the file system and to the database.
4. Set the boolean Updating back to false.

As you can imagine, I have to manage errors and start a rollback procedure to restore the database and the file system.
I have some doubts about how to write my method. I have:

- The entity.
- The repository interface that extends JpaRepository, with query creation from method names and @Query-annotated methods, which I had to annotate with @Transactional when they write to the database (otherwise I received an error).
- The service interface.
- The service implementation that contains all the methods that make simple changes to the database. This class is annotated with @Transactional.

From the other classes I call the service methods to work with the database, but if I call several of these methods in a row, each value is written to the database individually, so it isn't possible to roll them back together. Or am I wrong?
Step 1 has to be written to the database immediately, while the other changes should use the @Transactional properties. But is just adding @Transactional to my method enough? For the file system rollback I create a backup of all the subfolders and restore them in case of error.
For example:
@Transactional(rollbackFor = FileSystemException.class)
private void changeDisplacement(int idApplication, int idDisplacement) {
    applicationServices.setUpdating(true); // this has to be written immediately to the database so that the other methods stop using this application
    Application application = applicationServices.getId(idApplication);
    application.setDisplacement(displacementServices.getId(idDisplacement));
    // OTHER OPERATIONS ON DIFFERENT TABLES
    // OPERATIONS ON THE FILE SYSTEM, CATCHING ALL EXCEPTIONS WITH TRY-CATCH; IN THE CATCH, RESTORE THE FILE SYSTEM AND THROW FileSystemException TO START THE DATABASE ROLLBACK
    // In the finally clause call applicationServices.setUpdating(false)
}
Can it work with this logic, or is the @Transactional annotation wrong here?
Thanks

@Transactional is OK here. The only thing is that you need to set the propagation of applicationServices.setUpdating to REQUIRES_NEW so that it gets committed individually:
public class ApplicationServices {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void setUpdating(boolean b) {
        // update the DB here
    }
}
In the case of an exception, the flag will still be updated in the DB, as long as you keep the call to setUpdating in the finally block.
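Putting this together with the structure described in the question, the calling side might look like the sketch below. restoreFileSystemBackup() is a hypothetical helper standing in for the backup/restore step the question describes, and the method is declared public because Spring's proxy-based @Transactional is not applied to private methods:

@Transactional(rollbackFor = FileSystemException.class)
public void changeDisplacement(int idApplication, int idDisplacement) throws FileSystemException {
    applicationServices.setUpdating(true); // REQUIRES_NEW: committed immediately
    try {
        Application application = applicationServices.getId(idApplication);
        application.setDisplacement(displacementServices.getId(idDisplacement));
        // ... other operations on different tables, then on the file system ...
    } catch (IOException e) {
        restoreFileSystemBackup(); // hypothetical helper: restore the subfolder backup
        throw new FileSystemException(e.getMessage()); // triggers the rollback of this transaction
    } finally {
        applicationServices.setUpdating(false); // REQUIRES_NEW: commits even if this transaction rolls back
    }
}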

There are multiple questions here and some of them are hard to grasp, but here is a bit of input. When you have this:
@Transactional(rollbackFor = FileSystemException.class)
private void changeDisplacement(int idApplication, int idDisplacement) {
    applicationServices.setUpdating(true);
That flag will hit the database only when the @Transactional method finishes; until then the change stays in the Hibernate context.
So while you execute changeDisplacement and someone else comes and reads that flag, they will see false (because you have not written it to the DB just yet). You could read it via READ_UNCOMMITTED, but it's up to your application whether you allow that.
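For illustration, the reader side of that option might look like this (a sketch, assuming a Spring Data repository and an isUpdating flag on the Application entity, both hypothetical names; note that some databases, e.g. PostgreSQL, treat READ_UNCOMMITTED as READ_COMMITTED):

// Deliberately allows dirty reads so the flag is visible before the writer commits.
@Transactional(isolation = Isolation.READ_UNCOMMITTED, readOnly = true)
public boolean isApplicationUpdating(int idApplication) {
    return applicationRepository.findById(idApplication)
            .map(Application::isUpdating)
            .orElse(false);
}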
You could have a method with REQUIRES_NEW, set that flag to true there, and in case of a revert update the flag back.
Generally, updating both the DB and the file system (and keeping them in sync) is not easy. The way I have done it before (there might be better options) is to register events once the DB commit has succeeded, and only then write to the file system, along the lines of the sketch below.
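A minimal sketch of that event-based idea using Spring's transaction synchronization (it assumes Spring 5.3+, where TransactionSynchronization has default methods; on older versions extend TransactionSynchronizationAdapter instead; writeChangesToFileSystem is a hypothetical helper):

import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Transactional
public void changeDisplacement(int idApplication, int idDisplacement) {
    // ... database changes ...
    // Defer the file system work until the transaction has committed successfully:
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            writeChangesToFileSystem(idApplication); // runs only after a successful commit
        }
    });
}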

Related

JPA use flush to trigger exception and halt execution

In a recent task, after I created an object I flushed the result to the database. The database table had a unique constraint, meaning that if I tried to flush the same record a second time I would get a ConstraintViolationException. A sample snippet is shown below:
createEntityAndFlush(result);
sendAsyncRequestToThirdSystem(param);
The code for createEntityAndFlush:
private T createEntityAndFlush(final T entity) throws ServiceException {
    log.debug("Persisting {}", entity.getClass().getSimpleName());
    getEntityManager().persist(entity);
    getEntityManager().flush();
    return entity;
}
The reason I used flush was that I wanted to make sure a ConstraintViolationException would be thrown before the transaction finished, and thus before sendAsyncRequestToThirdSystem was called. But that was not the case: sendAsyncRequestToThirdSystem was still called after the exception was thrown.
To test the code under race conditions, I used the ManagedExecutorService and created two runnable tasks (Future<?> submit(Runnable task)) to replicate the incoming requests.
Eventually the problem was solved by performing a lock on a new table for each unique request id, but I would like to know where I went wrong in my first approach (e.g. a wrong use of flush, or the ManagedExecutorService being responsible for the awkward behaviour). Thanks in advance!
The issue is that while flush() does flush the changes to the database, the transaction is still open, and the unique constraint is only checked when the transaction is committed (this may depend on the database, but it is the case at least with Postgres and any DB using MVCC).
So you will need to make sure that createEntityAndFlush(result) runs in its own transaction, possibly with @Transactional(propagation = Propagation.REQUIRES_NEW) (or the equivalent, if not using Spring), to see whether the unique index is violated.
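A sketch of that suggestion, assuming Spring's annotations (in plain Java EE the equivalent would be @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)); note that the method must also be public and invoked through the container proxy for the annotation to take effect:

@Transactional(propagation = Propagation.REQUIRES_NEW)
public T createEntityAndFlush(final T entity) throws ServiceException {
    getEntityManager().persist(entity);
    getEntityManager().flush();
    return entity; // the insert is committed, and the constraint checked, when this method returns
}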

@Transactional annotation works with saveAndFlush?

I have the following implementation.
@Transactional
public void saveAndGenerateResult(Data data) {
    saveDataInTableA(data.someAmountForA);
    saveDataInTableB(data.someAmountForB);
    callAnAggregatedFunction(data);
}

public void saveDataInTableA(DataA a) {
    tableARepository.saveAndFlush(a);
}

public void saveDataInTableB(DataB b) {
    tableBRepository.saveAndFlush(b);
}

public void callAnAggregatedFunction(Data data) {
    // Do something based on the data saved from the beginning in Table A and Table B
}
It is important to use saveAndFlush so that the data is immediately available to callAnAggregatedFunction, which computes an aggregated result and saves it to another table. That is why I am not using save, which, as far as I know, does not flush the changes to the database immediately.
However, I am using a @Transactional annotation on saveAndGenerateResult, because I want to roll back the database operations performed in that function in case of any failure, which is normally what a @Transactional annotation on a method ensures.
What will happen in this specific case? I am using saveAndFlush, which flushes the data immediately to the database tables; if the last function (i.e. callAnAggregatedFunction) fails to write its data, will the previous write operations on table A and table B be rolled back?
Will the previous write operations in table A and table B be rolled back?
Yes, unless your saveAndFlush() methods have their own transactions (i.e. with propagation = REQUIRES_NEW).
If they are all part of the transaction you started in saveAndGenerateResult(), all modifications made to the database will be rolled back in case of failure.
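For contrast, a sketch of the REQUIRES_NEW variant (note that for any propagation setting to apply, the call must go through the Spring proxy, so same-class calls like the ones in the question would need to move to a separate bean):

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void saveDataInTableA(DataA a) {
    tableARepository.saveAndFlush(a); // committed on return; not rolled back with the caller
}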
For more information: Spring - #Transactional - What happens in background?
Spring #Transactional - isolation, propagation

How to Hibernate Batch Insert with real-time data? Use @Transactional or not?

I am trying to perform batch inserts with data that is currently being inserted into the DB one statement per transaction. The transaction code looks similar to the snippet below. Currently, the addHolding() method is called for each quote that comes in from an external feed, and these quote updates happen about 150 times per second.
public class HoldingServiceImpl {

    @Autowired
    private HoldingDAO holdingDao;

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void addHolding(Quote quote) {
        Holding holding = transformQuote(quote);
        holdingDao.addHolding(holding);
    }
}
And the DAO gets the current session from the Hibernate SessionFactory and calls save on the object.
public class HoldingDAOImpl {

    @Autowired
    private SessionFactory sessionFactory;

    public void addHolding(Holding holding) {
        sessionFactory.getCurrentSession().save(holding);
    }
}
I have looked at the Hibernate batching documentation, but it is not clear from the document how I would organize the code for batch inserting in this case, since I don't have the full list of data at hand but rather am waiting for it to stream in.
Does merely setting the Hibernate batching properties in the properties file (e.g. hibernate.jdbc.batch_size=20) "magically" batch these inserts? Or will I need to, say, capture each quote update in a synchronized list, and then insert the list's contents and clear the list when the batch size limit is reached?
Also, the whole purpose of implementing batching is to see if performance improves. If there is a better way to handle inserts in this scenario, let me know.
Setting the property hibernate.jdbc.batch_size=20 tells Hibernate to group the pending inserts into JDBC batches of 20 statements.
When you call session.save(), the insert is only queued in the in-memory Hibernate session; only once a flush happens does Hibernate synchronize these changes with the database, sending the queued inserts in batches. Hence setting the Hibernate batch size is enough to get batch inserts; fine-tune the batch size according to your needs.
Also make sure your transactions are handled properly: committing a transaction also forces Hibernate to flush the session.
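For reference, the classic batching pattern from the Hibernate manual, adapted to this case as a sketch; it assumes the incoming quotes are first buffered into a list (e.g. the synchronized list the question suggests) and then inserted in one transaction:

@Transactional
public void addHoldings(List<Quote> quotes) {
    Session session = sessionFactory.getCurrentSession();
    int count = 0;
    for (Quote quote : quotes) {
        session.save(transformQuote(quote));
        if (++count % 20 == 0) { // 20 == hibernate.jdbc.batch_size
            session.flush();     // send the queued inserts as JDBC batches
            session.clear();     // detach the entities to keep memory flat
        }
    }
}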

Spring @Transactional DAO calls return same object

We are using Spring and iBatis, and I have discovered something interesting in the way a service method annotated with @Transactional handles multiple DAO calls that return the same record. Here is an example of a method that does not work.
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
The problem with the above method is that the second execution of the line
individualDAO.selectByPrimaryKey(trans.getPartyId())
returns the exact object returned by the first call.
This means that oldIndvRecord and individual are the same object, so the line
individualHistoryDAO.insert(oldIndvRecord);
adds a row to the history table that already contains the changes (which we do not want).
In order for it to work, it must look like this:
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(individual);
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    individualDAO.updateByPrimaryKey(individual);
}
We wanted to write a service method called updateIndividual that we could use for all updates of this table, and that would store a row in the IndividualHistory table before performing the update.
@Transactional
public void updateIndividual(Individual individual) {
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(individual.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
But it does not store the row as it was before the object was changed. We can even explicitly instantiate different objects before the DAO calls, and the second one still ends up being the same object as the first.
I have looked through the Spring documentation and cannot determine why this is happening.
Can anyone explain this?
Is there a setting that can make the second DAO call return the database contents rather than the previously returned object?
If you were using Hibernate as the ORM, this behavior would be perfectly described in the Hibernate documentation, in the Transaction chapter:
Through Session, which is also a transaction-scoped cache, Hibernate provides repeatable reads for lookup by identifier and entity queries and not reporting queries that return scalar values.
The same goes for iBatis/MyBatis:
MyBatis uses two caches: a local cache and a second level cache. Each time a new session is created, MyBatis creates a local cache and attaches it to the session. Any query executed within the session will be stored in the local cache, so further executions of the same query with the same input parameters will not hit the database. The local cache is cleared upon update, commit, rollback and close.
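If the second select must hit the database, one workaround is sketched below, assuming MyBatis and access to the underlying SqlSession (alternatively, setting localCacheScope=STATEMENT in the MyBatis configuration disables session-scoped caching of selects altogether):

Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
sqlSession.clearCache(); // drop the session-local cache between the two selects
Individual oldIndvRecord = individualDAO.selectByPrimaryKey(trans.getPartyId()); // fresh copy from the DB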

How to get Properties from a Neo4j Database in Server Plugin after Commit?

I have to implement a Neo4j server plugin that reacts to changes to the database and gets information about those changes. I need to get all the data that has been added, changed and deleted in a transaction. I use a TransactionEventHandler registered with the database. For performance reasons I have to use the afterCommit callback, which is called after the changes have been committed to the database; this way the transaction is not held back by the plugin.
Now, inside this callback I do something similar to this:
public void afterCommit(TransactionData data, Void arg1) {
    for (Node n : data.createdNodes()) {
        String firstkey = n.getPropertyKeys().iterator().next();
    }
}
But getPropertyKeys throws an exception because the transaction has already been committed. I don't understand why this is a problem: I don't want to change anything in the transaction, I just want the properties of the node that has been changed. Is there some way to work around this? What is the reason for the exception?
The Exception:
java.lang.IllegalStateException: This transaction has already been completed.
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.assertTransactionOpen(KernelTransactionImplementation.java:376)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.acquireStatement(KernelTransactionImplementation.java:261)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.acquireStatement(KernelTransactionImplementation.java:80)
at org.neo4j.kernel.impl.core.ThreadToStatementContextBridge.instance(ThreadToStatementContextBridge.java:64)
at org.neo4j.kernel.InternalAbstractGraphDatabase$8.statement(InternalAbstractGraphDatabase.java:785)
at org.neo4j.kernel.impl.core.NodeProxy.getPropertyKeys(NodeProxy.java:358)
at de.example.neo4jVersionControl.ChangeEventListener.afterCommit(ChangeEventListener.java:41)
In afterCommit the transaction has already been committed (hence the name). To access properties from a node you need a transactional context; remember that every operation (even a read-only one) requires this.
The recommended way for implementations of TransactionEventHandler is to rely on the TransactionData only. TransactionData.assignedNodeProperties() will return the properties of the newly created nodes as well, as in the sketch below.
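A minimal sketch of that approach (the processing body is left as a placeholder):

public void afterCommit(TransactionData data, Void state) {
    for (PropertyEntry<Node> entry : data.assignedNodeProperties()) {
        Node node = entry.entity();   // the node the property belongs to
        String key = entry.key();     // the property key
        Object value = entry.value(); // the newly assigned value
        // process created/changed properties here, without needing an open transaction
    }
}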
