I've been asked to write some coded tests for a hibernate-based data access object.
I figured I'd start with a trivial test: when I save a model, it should appear in the collection returned by dao.getTheList(). The problem is that, no matter what I do, dao.getTheList() always returns an empty collection.
The application code is already working in production, so let's assume that the problem is just with my test code.
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    session.save(model);
    session.flush();

    final Collection<Model> actual = dao.getTheList();

    assertEquals(1, actual.size());
}
The test fails with: expected:<1> but was:<0>
So far, I've tried explicitly committing after the insert, and disabling the cache, but that hasn't worked.
I'm not looking to become a master of Hibernate, and I haven't been given enough time to read the entire documentation. Without really knowing where to start, it seemed like this might be a good question for the community.
What can I do to make sure that my Hibernate insert is flushed/committed/de-cached/or whatever it is, before the verification step of the test executes?
[edit] Some additional info on what I've tried. I tried manually committing the transaction between the insert and the call to dao.getTheList(), but I just get the error: Could not roll back Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    final Transaction firstTransaction = session.beginTransaction();
    session.save(model);
    session.flush();
    firstTransaction.commit();

    final Transaction secondTransaction = session.beginTransaction();
    final Collection<Model> actual = dao.getTheList();
    secondTransaction.commit();

    assertEquals(1, actual.size());
}
I've also tried taking the @Transactional annotation off the test method and annotating each of two helper methods instead, one for each Hibernate job. With that, though, I get the error: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here.
[/edit]
I think the underlying DBMS might hide the change from other transactions as long as the changing transaction has not completed yet. Is getTheList running in a separate transaction? Are you using Oracle or Postgres?
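To illustrate the point about transaction visibility: if the DAO opens its own Session instead of using the one bound to the test's transaction, an MVCC database will hide the flushed-but-uncommitted row from it. A minimal sketch of the difference (an illustrative DAO, not the asker's actual code, assuming a classic Hibernate 3 style SessionFactory):

import java.util.Collection;

import org.hibernate.SessionFactory;

public class ModelDao {

    private final SessionFactory sessionFactory;

    public ModelDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @SuppressWarnings("unchecked")
    public Collection<Model> getTheList() {
        // Transaction-bound session: sees the test's flushed (uncommitted) insert.
        return sessionFactory.getCurrentSession().createQuery("from Model").list();

        // A freshly opened session runs in a separate transaction and, on an
        // MVCC database, would NOT see the insert before commit:
        // return sessionFactory.openSession().createQuery("from Model").list();
    }
}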
In a recent task, after I created an object I flushed the result to the database. The database table had a unique constraint, meaning that if I tried to flush the same record for the second time, I would get a ConstraintViolationException. A sample snippet is shown below:
createEntityAndFlush(result);
sendAsyncRequestToThirdSystem(param);
The code for the createEntityAndFlush:
private T createEntityAndFlush(final T entity) throws ServiceException {
    log.debug("Persisting {}", entity.getClass().getSimpleName());
    getEntityManager().persist(entity);
    getEntityManager().flush();
    return entity;
}
The reason I used flush was to make sure that a ConstraintViolationException would be thrown before the transaction finished, and thus before sendAsyncRequestToThirdSystem was called. But that was not the case: sendAsyncRequestToThirdSystem was still called, and the exception only surfaced afterwards.
To test the code under race conditions, I used the ManagedExecutorService and submitted two Runnable tasks (Future<?> submit(Runnable task)) to replicate the incoming requests.
Eventually the problem was solved by taking a lock on a new table for each unique request id, but I would like to know where I went wrong in my first approach (e.g. a wrong use of flush, or the ManagedExecutorService being responsible for the awkward behaviour). Thanks in advance!
The issue is that while flush() does flush the changes to the database, the transaction is still open, and the unique constraint is only checked when the transaction is committed (this may depend on the database, but it is the case at least with PostgreSQL and other MVCC databases).
So you will need to make sure that createEntityAndFlush(result) runs in its own transaction, possibly with @Transactional(propagation = Propagation.REQUIRES_NEW) (or the equivalent, if not using Spring), so that a unique-index violation surfaces there.
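A minimal sketch of that approach, assuming Spring manages the bean (class and method names are illustrative, not the original code):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EntityWriter {

    @PersistenceContext
    private EntityManager entityManager;

    // REQUIRES_NEW suspends the caller's transaction and commits this one on
    // return, so a unique-constraint violation surfaces here, before the
    // caller goes on to call sendAsyncRequestToThirdSystem.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public <T> T createEntityInNewTransaction(T entity) {
        entityManager.persist(entity);
        entityManager.flush();
        return entity;
    }
}

Note that with Spring's proxy-based transactions, REQUIRES_NEW only takes effect when the method is invoked through the proxy, i.e. from another bean, not via a this.createEntityInNewTransaction(...) call inside the same class.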
So I have this method:
@Transactional
public void savePostTitle(Long postId, String title) {
    Post post = postRepository.findOne(postId);
    post.setTitle(title);
}
As per this post:
The save method serves no purpose. Even if we remove it, Hibernate
will still issue the UPDATE statement since the entity is managed and
any state change is propagated as long as the currently running
EntityManager is open.
and indeed the update statement is issued, but if I run the method without the @Transactional annotation:
public void savePostTitle(Long postId, String title) {
    Post post = postRepository.findOne(postId);
    post.setTitle(title);
}
Hibernate will not issue the update statement, so one has to call postRepository.save(post) explicitly.
What is the difference between using @Transactional or not in this specific scenario?
In a standard configuration, the scope of the persistence context is bound to the transaction.
If you don't have an explicit transaction defined by means of the annotation, your (non-existent) transaction spans just the reading call to the database.
After that, the entity you just loaded is no longer managed.
This means changes to it won't be tracked or saved.
Flushing won't help, because there are no tracked changes.
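To make this concrete, here is a rough sketch of the two cases side by side (assuming a Spring Data style repository like the one in the question; the class wiring is illustrative):

import org.springframework.transaction.annotation.Transactional;

public class PostService {

    private final PostRepository postRepository; // as in the question

    public PostService(PostRepository postRepository) {
        this.postRepository = postRepository;
    }

    // Managed case: the transaction keeps the persistence context open, the
    // loaded Post stays managed, and dirty checking issues the UPDATE at commit.
    @Transactional
    public void savePostTitle(Long postId, String title) {
        Post post = postRepository.findOne(postId);
        post.setTitle(title); // tracked; no explicit save() needed
    }

    // Detached case: findOne runs in its own short-lived transaction and the
    // persistence context closes right after the read, so the returned Post is
    // detached. The setter is invisible to Hibernate and must be merged back.
    public void savePostTitleDetached(Long postId, String title) {
        Post post = postRepository.findOne(postId); // detached once this returns
        post.setTitle(title);                       // not tracked
        postRepository.save(post);                  // save/merge issues the UPDATE
    }
}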
I am trying to perform batch inserts with data that is currently being inserted into the DB one statement per transaction. The transaction code looks similar to the snippet below. Currently, the addHolding() method is called for each quote that comes in from an external feed, and these quote updates arrive about 150 times per second.
public class HoldingServiceImpl {

    @Autowired
    private HoldingDAO holdingDao;

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void addHolding(Quote quote) {
        Holding holding = transformQuote(quote);
        holdingDao.addHolding(holding);
    }
}
The DAO gets the current session from the Hibernate SessionFactory and calls save() on the object.
public class HoldingDAOImpl {

    @Autowired
    private SessionFactory sessionFactory;

    public void addHolding(Holding holding) {
        sessionFactory.getCurrentSession().save(holding);
    }
}
I have looked at the Hibernate batching documentation, but it is not clear from the document how I would organize the code for batch inserting in this case, since I don't have the full list of data at hand but am waiting for it to stream in.
Does merely setting the Hibernate batching property (e.g. hibernate.jdbc.batch_size=20) "magically" batch these inserts? Or will I need to, say, capture each quote update in a synchronized list, and then insert the list's contents and clear it once the batch size limit is reached?
Also, the whole purpose of implementing batching is to see if performance improves. If there is a better way to handle inserts in this scenario, let me know.
Setting hibernate.jdbc.batch_size=20 tells Hibernate to group the pending inserts into JDBC batches of up to 20 statements when the session is flushed.
When you call session.save(), the insert is only queued in Hibernate's in-memory session; only once a flush happens does Hibernate synchronize those changes with the database, and that is when the batch size takes effect. So setting the Hibernate batch size is the main step for batch inserts; fine-tune the batch size according to your needs.
Also make sure your transactions are handled properly: committing a transaction also forces Hibernate to flush the session.
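If each quote really is written in its own REQUIRES_NEW transaction, every flush only ever contains a single insert, so there is nothing for the JDBC driver to batch. Buffering the incoming quotes, as the question itself suggests, gives Hibernate something to batch; a rough sketch under that assumption (class name and wiring are illustrative, and transaction demarcation around the calling code is assumed rather than shown):

import java.util.ArrayList;
import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class BatchingHoldingWriter {

    private static final int BATCH_SIZE = 20; // keep in sync with hibernate.jdbc.batch_size

    private final SessionFactory sessionFactory;
    private final List<Holding> buffer = new ArrayList<Holding>();

    public BatchingHoldingWriter(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Called for each incoming quote (~150/s); buffers instead of opening a
    // new transaction per row.
    public synchronized void add(Holding holding) {
        buffer.add(holding);
        if (buffer.size() >= BATCH_SIZE) {
            flushBuffer();
        }
    }

    // With hibernate.jdbc.batch_size set, the saves go out as one JDBC batch
    // on flush; clear() keeps the session from growing without bound.
    private void flushBuffer() {
        Session session = sessionFactory.getCurrentSession();
        for (Holding holding : buffer) {
            session.save(holding);
        }
        session.flush();
        session.clear();
        buffer.clear();
    }
}

One caveat worth knowing: Hibernate silently disables JDBC insert batching for entities whose id uses an IDENTITY generator, because it needs the generated key back for every row.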
Currently we are using Play 1.2.5 with Java and MySQL. We have a simple JPA model (a Play entity extending the Model class) that we save to the database.
SimpleModel test = new SimpleModel();
test.foo = "bar";
test.save();
At each web request we save multiple instances of the SimpleModel, for example:
JPAPlugin.startTx(false);
for (int i = 0; i < 5000; i++) {
    SimpleModel test = new SimpleModel();
    test.foo = "bar";
    test.save();
}
JPAPlugin.closeTx(false);
We are using the JPAPlugin.startTx and closeTx to manually start and end the transaction.
Everything works fine if there is only one request executing the transaction.
What we noticed is that if a second request tries to execute the loop simultaneously, the second request gets a "Lock wait timeout exceeded; try restarting transaction javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not insert: [SimpleModel]" since the first request locks the table but is not done until the second request times out.
This results in multiple:
ERROR AssertionFailure:45 - an assertion failure occured (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session)
org.hibernate.AssertionFailure: null id in SimpleModel entry (don't flush the Session after an exception occurs)
Another side effect is that the CPU usage during the inserts goes crazy.
To fix this, I'm thinking of creating a transaction-aware queue to insert the entities sequentially, but that would result in huge insert times.
What is the correct way to handle this situation?
JPAPlugin on Play Framework 1.2.5 is not thread-safe, and you will not resolve this using this version of Play.
That problem is fixed in Play 2.x, but if you can't migrate, try using Hibernate directly.
You should not need to handle transactions yourself in this scenario.
Instead, either put your inserts in a controller method or in an asynchronous job if the task is time-consuming.
Jobs and controllers both handle transactions.
However, check that this is really what you are trying to achieve. Each HTTP request creating 5000 records does not seem realistic. Perhaps it would make more sense to have a container model with a collection?
Do you really need a transaction for the entire insert? Does it matter if the database is not locked during the data import?
You can simply create a job and execute it for each insert:
for (int i = 0; i < 5000; i++) {
    new Job() {
        public void doJob() {
            SimpleModel test = new SimpleModel();
            test.foo = "bar";
            test.save();
        }
    }.now();
}
This will create a single transaction for each insert and get rid of your database lock issue.
My question is related to Transactions and Exceptions
Requirements:
I have 10 records to insert into a database table, and after inserting each record, I insert data into a second table. If inserting into the second table fails, I want to roll back only that record.
Example:
Say we handle a cash transfer (from one account to another) for 10 persons at a time.
pseudo code:
------------- Start of EJB method
for (int i = 0; i < transferRecords.length; i++) {
    try {
        // Deduct cash from TransferRecord.accountFrom -- includes use of the Hibernate Session
        // Add cash to TransferRecord.accountTo -- includes use of the Hibernate Session
    } catch (AppException exception) {
        // Roll back the transaction only for this particular transfer (i)
        // But here, when I move on to the next record, it says the session is closed
    }
}
------------- End of EJB method
Here AppException is created with the @ApplicationException(rollback=true) annotation.
The functionality we want is: even if the transaction fails for TransferRecord 2 (say), the data should still be committed for records 0, 1, 3, 4, and so on, but not for record 2.
But the issue here is: when TransferRecord 2 fails and I move on to TransferRecord 3, I get a "Session Closed" error.
My doubts are:
1. Is this the right approach, or should I run the for loop (one call per TransferRecord) outside of the EJB?
2. How can I make sure that the session is not closed, but the transaction is rolled back (only for that particular failed transfer)?
Thank you in advance.
I am using EJB3, Hibernate 3.x, and JBoss 4.2.x, with Container Managed Transactions.
Is this the right approach?
No. With CMT, your method is your transactional unit, so here all your TransferRecords are handled in one and the same transaction.
By the way, how do you roll back the transaction? Do you propagate a RuntimeException or do you call setRollbackOnly()? I'm just curious.
Or should I run the for loop (for each TransferRecord) outside of the EJB?
Why outside? Nothing forces you to do that. If you want to process each TransferRecord in its own transaction, you should pass them to another EJB method (the code below is inspired by this answer):
// supposing processRecords is defined on MyStatelessRemote1 and process defined on MyStatelessLocal1
@Stateless
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class MyStatelessBean1 implements MyStatelessLocal1, MyStatelessRemote1 {

    @EJB
    private MyStatelessLocal1 myBean;

    public void processRecords(List<TransferRecord> records) {
        // No transactional stuff, so no need for a transaction here
        for (TransferRecord record : records) {
            this.myBean.process(record);
        }
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void process(TransferRecord transferRecord) {
        // Transactional stuff performed in its own transaction
        // ...
    }
}
How can I make sure that the session is not closed, but the transaction is rolled back (only for that particular failed transfer)?
I think I covered that part.
The only options you have here are either to use user transactions instead of container-managed transactions, or to loop outside the bean, so that every time you enter the bean you get a fresh entity manager with an associated transaction and connection (basically a session).
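A rough sketch of the user-transaction (bean-managed) variant, reusing the TransferRecord and AppException types from the question (error handling trimmed to the essentials):

import java.util.List;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class TransferBean {

    @Resource
    private UserTransaction utx;

    public void processRecords(List<TransferRecord> records) throws Exception {
        for (TransferRecord record : records) {
            utx.begin();
            try {
                process(record); // deduct from accountFrom, credit accountTo
                utx.commit();
            } catch (AppException e) {
                utx.rollback(); // only this record's transfer is discarded
            }
        }
    }

    private void process(TransferRecord record) throws AppException {
        // Hibernate session work for one transfer
    }
}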
I think you can create two separate transactions: the first for TransferRecord(1) (doing a commit once everything is fine), and then starting another TX for each TransferRecord(i+1).
Another approach is using savepoints, which let you roll back and discard everything past a given savepoint (but I prefer the first approach).
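For completeness, savepoints are a plain JDBC feature, so using them means working at the Connection level rather than through the Hibernate session; a minimal sketch (applyTransfer and TransferRecord are placeholders for the real debit/credit SQL):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.util.List;

public class SavepointTransfer {

    public void transferAll(Connection conn, List<TransferRecord> records) throws SQLException {
        conn.setAutoCommit(false);
        for (TransferRecord record : records) {
            Savepoint savepoint = conn.setSavepoint();
            try {
                applyTransfer(conn, record); // debit accountFrom, credit accountTo
            } catch (SQLException e) {
                conn.rollback(savepoint); // discard only this record's work
            }
        }
        conn.commit(); // everything not rolled back commits together
    }

    private void applyTransfer(Connection conn, TransferRecord record) throws SQLException {
        // issue the two UPDATE statements for one transfer
    }
}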
Regards.