I have the following implementation.
@Transactional
public void saveAndGenerateResult(Data data) {
    saveDataInTableA(data.someAmountForA);
    saveDataInTableB(data.someAmountForB);
    callAnAggregatedFunction(data);
}

public void saveDataInTableA(DataA a) {
    tableARepository.saveAndFlush(a);
}

public void saveDataInTableB(DataB b) {
    tableBRepository.saveAndFlush(b);
}

public void callAnAggregatedFunction(Data data) {
    // Do something based on the data saved earlier in Table A and Table B
}
It is important to use saveAndFlush so that the data is immediately available to callAnAggregatedFunction, which computes an aggregated result and saves it to another table. That is why I am not using save, which, as far as I know, does not flush the pending changes to the database immediately.
However, I am using a @Transactional annotation on saveAndGenerateResult, because I want to roll back the database changes made in that method in case of any failure, which is normally what @Transactional ensures.
What happens in this specific case? I am using saveAndFlush, which flushes the data immediately to the database tables, so if the last function (i.e. callAnAggregatedFunction) fails to write its data, will the previous write operations on table A and table B be rolled back?
Will the previous write operations in table A and table B be rolled back?
Yes, unless your saveAndFlush() methods have their own transactions (i.e. with propagation = REQUIRES_NEW).
If they're all part of the transaction you started in saveAndGenerateResult(), all modifications made to the database will be rolled back in case of failure.
For more information: Spring - @Transactional - What happens in background?
Spring @Transactional - isolation, propagation
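For illustration, a minimal sketch of the REQUIRES_NEW exception mentioned above: if a save method is moved to a separate bean and annotated like this, its write commits independently and survives a rollback of saveAndGenerateResult(). (The TableAService class name is hypothetical; the repository and types follow the question.)

@Service
public class TableAService {

    @Autowired
    private TableARepository tableARepository;

    // Suspends the caller's transaction and commits this write on its own,
    // so it is NOT rolled back when saveAndGenerateResult() later fails.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void saveDataInTableA(DataA a) {
        tableARepository.saveAndFlush(a);
    }
}

With the default REQUIRED propagation, the method simply joins the caller's transaction, and all of its writes roll back together with it.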
Related
I currently have a use case where, if a user manually submits a data file to be read into the database, I need to check whether the data already exists in the DB. If it does, I want to delete it and then process and save the new file. The problem is that my methods are marked @Transactional, so even though the delete methods are run, they aren't committed before the save method is called, which violates a unique constraint and causes the rollback.
I have tried every level of propagation and also tried splitting them up into two separate transactions where my controller calls them one by one and they don't call each other.
ERROR: org.springframework.transaction.UnexpectedRollbackException: Transaction silently rolled back because it has been marked as rollback-only
CODE:
@Transactional
public void saveAllPositionData(InputStream is) throws IOException {
    log.info("Parsing position data...");
    ParsingResult parsingResult = positionParser.parse(is);
    if (!parsingResult.getPositions().isEmpty()) {
        LocalDate businessDate = parsingResult.getPositions().get(0).getBusinessDate();
        overwriteData(businessDate);
    }
    try {
        positionRepo.saveAll(parsingResult.getPositions()); // UNIQUE CONSTRAINT FAILS HERE CAUSING ROLLBACK
        priceRepo.saveAll(parsingResult.getPrices());
        for (PositionTable position : parsingResult.getPositions()) {
            if (position.getNumberOfMemos() > 0) memoRepo.saveAll(position.getCorrespondingMemos());
        }
    } catch (Exception e) {
        log.warn("Invalid data returned from BPS parsing job: {}", e.getMessage());
    }
}
@Transactional(propagation = Propagation.NESTED) // Tried Propagation.* and no Annotation
public void overwriteData(LocalDate businessDate) {
    if (memoRepo.countByBusinessDate(businessDate) > 0) {
        log.warn("Memo record(s) found by {} business date. Existing data will be overridden.", businessDate);
        memoRepo.deleteByBusinessDate(businessDate);
    }
    if (positionRepo.countByBusinessDate(businessDate) > 0) {
        log.warn("Position record(s) found by {} business date. Existing data will be overridden.", businessDate);
        positionRepo.deleteByBusinessDate(businessDate);
    }
    if (priceRepo.countByBusinessDate(businessDate) > 0) {
        log.warn("Price record(s) found by {} business date. Existing data will be overridden.", businessDate);
        priceRepo.deleteByBusinessDate(businessDate);
    }
}
UnexpectedRollbackException usually happens when an inner @Transactional method throws an exception that the outer @Transactional method catches without re-throwing (see this for more details). Methods on JpaRepository actually have @Transactional on them. In saveAllPositionData(), some of the repository calls throw an exception, but you catch it and do not rethrow it, which causes the UnexpectedRollbackException.
Also, @Transactional does not work when you call the method from within the same class (self-invocation). That means the @Transactional on overwriteData() has no effect in your code. (See the Method visibility and @Transactional section in the docs for more detail.)
The problem with this is my methods are marked @Transactional so even though the delete methods are run, they aren't committed before the save method is called which violates a unique constraint causing the rollback
You can try calling flush() on the JpaRepository after the delete methods. It applies all pending SQL changes collected so far to the database but does not commit the transaction. The current transaction will then see that the records are deleted, so when you insert the data later in the same transaction, you should not hit the unique constraint violation.
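A minimal sketch of that suggestion, assuming overwriteData() is also moved to a separate bean (the OverwriteService name is hypothetical) so that its @Transactional actually goes through a Spring proxy:

@Service
public class OverwriteService {

    @Autowired
    private PositionRepo positionRepo;

    // Joins the caller's transaction (default REQUIRED propagation).
    @Transactional
    public void overwriteData(LocalDate businessDate) {
        positionRepo.deleteByBusinessDate(businessDate);
        // Push the pending DELETE statements to the database without committing,
        // so a later saveAll() in the same transaction no longer trips the unique constraint.
        positionRepo.flush();
    }
}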
Because you are calling overwriteData() (@Transactional) from the transactional method saveAllPositionData(), no new transaction is created; it executes in the same transaction. This means, just like you said,
even though the delete methods are run, they aren't committed before the save method is called which violates a unique constraint causing the rollback.
The following illustrates this situation, where UserService is a class whose transactional invoice() method calls an inner createPDF() method that is also transactional.
Spring creates that transactional UserService proxy for you, but once
you are inside the UserService class and call other inner methods,
there is no more proxy involved. This means, no new transaction for
you.
One way to get around this is self-injection (injecting the bean's own proxy into itself), as sketched below.
Another is to keep the two methods in different classes.
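A minimal self-injection sketch (signatures simplified from the question; @Lazy breaks the circular dependency at startup, and whether REQUIRES_NEW on overwriteData() is acceptable depends on whether the deletes may stay committed even if the later save fails):

@Service
public class PositionService {

    // Inject this bean's own transactional proxy into itself.
    @Lazy
    @Autowired
    private PositionService self;

    @Transactional
    public void saveAllPositionData(LocalDate businessDate) {
        // Calling through the proxy, so the annotation on overwriteData() is honored.
        self.overwriteData(businessDate);
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void overwriteData(LocalDate businessDate) {
        // delete the existing rows for businessDate here
    }
}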
So I have this method:
@Transactional
public void savePostTitle(Long postId, String title) {
    Post post = postRepository.findOne(postId);
    post.setTitle(title);
}
As per this post:
The save method serves no purpose. Even if we remove it, Hibernate
will still issue the UPDATE statement since the entity is managed and
any state change is propagated as long as the currently running
EntityManager is open.
and indeed the update statement is issued. But if I run the method without the @Transactional annotation:
public void savePostTitle(Long postId, String title) {
    Post post = postRepository.findOne(postId);
    post.setTitle(title);
}
Hibernate will not issue the UPDATE statement, so one has to call postRepository.save(post); explicitly.
What is the difference between using @Transactional or not in this specific scenario?
In a standard configuration, the scope of the persistence context is bound to the transaction. If you don't have an explicit transaction defined by means of the annotation, your (non-existing) transaction spans just the reading call to the database. After that, the entity just loaded is no longer managed, which means changes to it are neither tracked nor saved. Flushing won't help, because there are no tracked changes.
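A sketch of the contrast, assuming a standard Spring Data JPA setup (method names follow the question; savePostTitleDetached is a hypothetical counterpart added for illustration):

// With @Transactional, the persistence context lives for the whole method:
// the loaded entity stays managed, and dirty checking issues the UPDATE at commit.
@Transactional
public void savePostTitle(Long postId, String title) {
    Post post = postRepository.findOne(postId);
    post.setTitle(title); // no explicit save() needed
}

// Without @Transactional, findOne() runs in its own short-lived transaction.
// The returned entity is detached, so the change must be merged back explicitly.
public void savePostTitleDetached(Long postId, String title) {
    Post post = postRepository.findOne(postId);
    post.setTitle(title);
    postRepository.save(post); // merge + UPDATE
}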
I am trying to perform batch inserts with data that is currently being inserted into the DB one statement per transaction. The transactional code looks similar to the snippet below. Currently, addHolding() is called for each quote that comes in from an external feed, and each of these quote updates happens about 150 times per second.
public class HoldingServiceImpl {

    @Autowired
    private HoldingDAO holdingDao;

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void addHolding(Quote quote) {
        Holding holding = transformQuote(quote);
        holdingDao.addHolding(holding);
    }
}
And the DAO gets the current session from the Hibernate SessionFactory and calls save on the object.
public class HoldingDAOImpl {

    @Autowired
    private SessionFactory sessionFactory;

    public void addHolding(Holding holding) {
        sessionFactory.getCurrentSession().save(holding);
    }
}
I have looked at the Hibernate batching documentation, but it is not clear from the document how I would organize the code for batch inserting in this case, since I don't have the full list of data at hand but am waiting for it to stream in.
Does merely setting the Hibernate batching property (e.g. hibernate.jdbc.batch_size=20) "magically" batch these inserts? Or will I need to, say, capture each quote update in a synchronized list, then insert the list's contents and clear the list when the batch size limit is reached?
Also, the whole purpose of implementing batching is to see whether performance improves. If there is a better way to handle inserts in this scenario, let me know.
Setting the property hibernate.jdbc.batch_size=20 tells Hibernate to group the pending insert statements into JDBC batches of 20 when the session is flushed.
When you call session.save(), the insert is only queued in the in-memory Hibernate session; only once a flush happens does Hibernate synchronize these changes with the database. So setting the Hibernate batch size is enough to get batched inserts; fine-tune the batch size according to your needs.
Also make sure your transactions are handled properly: committing a transaction also forces Hibernate to flush the session.
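If you prefer the explicit buffering variant the question mentions, here is a sketch. The buffer threshold is chosen to match hibernate.jdbc.batch_size, and it assumes losing up to 19 buffered quotes on a crash is acceptable:

public class HoldingServiceImpl {

    private static final int BATCH_SIZE = 20; // keep in sync with hibernate.jdbc.batch_size

    @Autowired
    private HoldingDAO holdingDao;

    private final List<Quote> buffer = new ArrayList<>();

    // Buffer streaming quotes; every BATCH_SIZE-th call writes the whole
    // batch in one transaction, which Hibernate then groups into JDBC batches.
    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public synchronized void addHolding(Quote quote) {
        buffer.add(quote);
        if (buffer.size() >= BATCH_SIZE) {
            for (Quote q : buffer) {
                holdingDao.addHolding(transformQuote(q));
            }
            buffer.clear();
        }
    }
}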
I have to write some methods that change values in the database and perform some operations on the file system. So I have to follow this sequence of steps:
1. Set the boolean Updating field to true in the database. It is used to block access to the file system and database information linked with this value (for example a fleet of cars).
2. Make some operations on the database, for example changing the date, name, value or other fields. These changes affect several database tables.
3. Make changes to the file system and database.
4. Set the boolean Updating back to false.
As you can imagine, I have to handle errors and start a rollback procedure to restore the database and the file system.
I have some doubts about how to write my methods. I have:
The entity
The repository interface, which extends JpaRepository and has query creation from method names as well as @Query methods, annotated with @Transactional when they write to the database (otherwise I received an error)
The service interface
The service implementation, which contains all the methods that make simple changes to the database. This class is annotated with @Transactional
From the other classes I call the service methods to work with the database, but if I call several of these methods in sequence, each one writes its values to the database individually, so it isn't possible to roll them back together, or am I wrong?
Step 1 has to be written immediately to the database, whereas the other changes should have @Transactional semantics. But is just adding @Transactional to my method enough? For file system rollback I create a backup of all subfolders and restore them in case of error.
For example:
@Transactional(rollbackFor = FileSystemException.class)
public void changeDisplacement(int idApplication, int idDisplacement) {
    applicationServices.setUpdating(true); // this has to be written immediately to the database so that the other methods stop using this application
    Application application = applicationServices.getId(idApplication);
    application.setDisplacement(displacementServices.getId(idDisplacement));
    // OTHER OPERATIONS ON DIFFERENT TABLES
    // OPERATIONS ON THE FILE SYSTEM, CATCHING ALL EXCEPTIONS WITH TRY-CATCH; IN THE CATCH, RESTORE THE FILE SYSTEM AND THROW FileSystemException TO START THE DATABASE ROLLBACK
    // In the finally clause, call applicationServices.setUpdating(false)
}
Can it work with this logic, or is @Transactional wrong here?
Thanks
@Transactional is OK here. The only thing you need is to set the propagation of applicationServices.setUpdating to REQUIRES_NEW so that it gets committed individually:
public class ApplicationServices {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void setUpdating(boolean b) {
        // update DB here
    }
}
In the case of an exception, the flag will still be reset in the DB as long as the call to setUpdating is in the finally block.
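Putting it together, a sketch of the calling method (names follow the question; the file-system details are elided into comments):

@Transactional(rollbackFor = FileSystemException.class)
public void changeDisplacement(int idApplication, int idDisplacement) throws FileSystemException {
    // Committed right away in its own REQUIRES_NEW transaction,
    // so other callers see the flag while this method is still running.
    applicationServices.setUpdating(true);
    try {
        Application application = applicationServices.getId(idApplication);
        application.setDisplacement(displacementServices.getId(idDisplacement));
        // file-system work goes here; on failure, restore the backup and
        // throw FileSystemException so the database changes roll back
    } finally {
        // Also REQUIRES_NEW, so the reset commits even during a rollback.
        applicationServices.setUpdating(false);
    }
}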
There are multiple questions here and some of them are hard to grasp, but here is a bit of input. When you have this:
@Transactional(rollbackFor = FileSystemException.class)
public void changeDisplacement(int idApplication, int idDisplacement) {
    applicationServices.setUpdating(true);
That flag will hit the database only when the @Transactional method finishes. Until then the change stays in the Hibernate persistence context.
So while you execute changeDisplacement and someone else comes and reads that flag, they will see false (because you have not written it to the DB just yet). They could read it via READ_UNCOMMITTED, but whether you allow that is up to your application.
You could have a method with REQUIRES_NEW that sets the flag to true and, in case of a rollback, updates the flag back.
Generally, updating both the DB and the file system and keeping them in sync is not easy. The way I have done it before (there might be better options) is to register events once the correct DB change was made and then write to the file system.
We are using Spring and iBATIS, and I have discovered something interesting in the way a service method with @Transactional handles multiple DAO calls that return the same record. Here is an example of a method that does not work.
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
The problem with the above method is that the second execution of the line
individualDAO.selectByPrimaryKey(trans.getPartyId())
returns the exact same object as the first call.
This means that oldIndvRecord and individual are the same object, and the line
individualHistoryDAO.insert(oldIndvRecord);
adds a row to the history table that already contains the changes (which we do not want).
In order for it to work, it must look like this.
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(individual);
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    individualDAO.updateByPrimaryKey(individual);
}
We wanted to write a service method called updateIndividual that we could use for all updates of this table and that would store a row in the IndividualHistory table before performing the update.
@Transactional
public void updateIndividual(Individual individual) {
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(individual.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
But it does not store the row as it was before the change. Even when we explicitly instantiate different objects before the two DAO calls, the second one ends up being the same object as the first.
I have looked through the Spring documentation and cannot determine why this is happening.
Can anyone explain this?
Is there a setting that can allow the 2nd DAO call to return the database contents and not the previously returned object?
This session-scoped caching behavior is perfectly described in the Hibernate documentation, in the Transaction chapter:
Through Session, which is also a transaction-scoped cache, Hibernate provides repeatable reads for lookup by identifier and entity queries and not reporting queries that return scalar values.
The same goes for iBATIS/MyBatis:
MyBatis uses two caches: a local cache and a second level cache. Each
time a new session is created MyBatis creates a local cache and
attaches it to the session. Any query executed within the session will
be stored in the local cache so further executions of the same query
with the same input parameters will not hit the database. The local
cache is cleared upon update, commit, rollback and close.
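If you are on MyBatis 3 (the successor of iBATIS), one way to make the second selectByPrimaryKey() call hit the database again is, I believe, to narrow the local cache to statement scope:

// MyBatis 3 configuration: clear the local cache after every statement
// instead of keeping it for the whole session.
Configuration configuration = sqlSessionFactory.getConfiguration();
configuration.setLocalCacheScope(LocalCacheScope.STATEMENT);

Alternatively, insert the history row before mutating the object, as the working version of processIndividualTrans() above already does.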