OpenJPA Transactions - Single or Multiple Entity managers? - java

I have a DBManager singleton that ensures instantiation of a single EntityManagerFactory. I'm debating the use of a single versus multiple EntityManagers, though, because only a single transaction can be associated with an EntityManager at a time.
I need to use multiple transactions. JPA doesn't support nested transactions.
So my question is: in most of your normal applications that use transactions in a single-db environment, do you use a single EntityManager at all? So far I have been using multiple EntityManagers, but I would like to see if creating a single one could do the trick and also speed things up a bit.
I found the page below helpful; hope it helps someone else too.
http://en.wikibooks.org/wiki/Java_Persistence/Transactions#Nested_Transactions
Technically, in JPA the EntityManager is in a transaction from the point it is created, so begin is somewhat redundant. Until begin is called, certain operations such as persist, merge and remove cannot be called. Queries can still be performed, and objects that were queried can be changed, although what happens to these changes is somewhat unspecified in the JPA spec; normally they will be committed, but it is best to call begin before making any changes to your objects. Normally it is best to create a new EntityManager for each transaction, to avoid having stale objects remain in the persistence context and to allow previously managed objects to be garbage collected.
After a successful commit the EntityManager can continue to be used, and all of the managed objects remain managed. However, it is normally best to close or clear the EntityManager to allow garbage collection and avoid stale data. If the commit fails, the managed objects are considered detached and the EntityManager is cleared. This means that commit failures cannot be caught and retried; if a failure occurs, the entire transaction must be performed again. The previously managed objects may also be left in an inconsistent state, meaning some of the objects' locking versions may have been incremented. Commit will also fail if the transaction has been marked for rollback. This can occur either explicitly, by calling setRollbackOnly, or implicitly, since rollback-only is required to be set if any query or find operation fails. This can be an issue, as some queries may fail without it being desirable for the entire transaction to be rolled back.
The rollback operation will roll back the database transaction only. The managed objects in the persistence context become detached and the EntityManager is cleared. This means any object previously read should no longer be used and is no longer part of the persistence context. The changes made to those objects are left as is; the object changes are not reverted.

EntityManagers are by definition not thread-safe, so unless your application is single-threaded, using a single EM is probably not the way to go.
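A minimal sketch of that setup, assuming a persistence unit named "my-pu" (an illustrative name): the singleton DBManager holds one EntityManagerFactory, while each unit of work gets its own short-lived EntityManager and transaction.

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;

public final class DBManager {
    // One factory for the whole application; it is thread-safe and expensive to create.
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("my-pu"); // assumed persistence-unit name

    private DBManager() {}

    // Each caller (and therefore each thread/transaction) gets its own EntityManager.
    public static EntityManager createEntityManager() {
        return EMF.createEntityManager();
    }
}
```

A typical unit of work then looks like this; closing the EntityManager afterwards keeps stale managed objects from piling up:

```java
EntityManager em = DBManager.createEntityManager();
EntityTransaction tx = em.getTransaction();
try {
    tx.begin();
    // ... persist / merge / remove ...
    tx.commit();
} catch (RuntimeException e) {
    if (tx.isActive()) tx.rollback();
    throw e;
} finally {
    em.close();
}
```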

Related

How does the default @Transactional work at the low level?

Does a propagation REQUIRED default @Transactional collect all queries and execute them at the end of the method altogether, or does it open a db transaction, execute BEGIN, execute every query as it finds it, and execute COMMIT when the transaction finishes?
Is this what is referred to as logical vs. physical transactions?
I am wondering because I am using a @Transactional test that executes a GET endpoint + DELETE endpoint + GET endpoint with READ_UNCOMMITTED; the behavior works as expected, but I see no trace of delete queries in the logs, only selects.
I would have expected to see all the queries issued and then a rollback, but I have the feeling that the transaction is just modifying the managed entities of the persistence context and only tries to save at the end of the test...
If I should be seeing the delete queries as the repository.remove() calls are executed, then maybe for some reason Hibernate is only logging queries coming out of a readOnly=false transaction.
Maybe this answer helps you: JPA flush vs commit
If there is an active transaction, JPA/Hibernate will execute the flush method when the transaction is committed. Until then, all changes applied to entities are collected in the unit of work.
With flush(), the changes to the data are reflected in the database as soon as flush is encountered, but they are still part of the open transaction. flush() MUST be enclosed in a transaction context, and you don't have to call it explicitly unless needed (in rare cases); EntityTransaction.commit() does that for you.
You can change this behavior by changing the flush strategy.
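A small sketch of the difference, assuming a plain JPA setup with an EntityManagerFactory `emf` and an entity class `Customer` (both illustrative): the UPDATE is sent to the database at flush(), but other transactions only see it after commit().

```java
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
tx.begin();

Customer c = em.find(Customer.class, 1L);
c.setName("new name");

em.flush();   // the UPDATE statement is executed now, inside the open transaction
// ... other sessions still see the old value here ...

tx.commit();  // the change becomes visible to other transactions and durable
em.close();
```

In a @Transactional test that rolls back at the end, Hibernate may never even need to flush the pending deletes, which could explain seeing only selects in the log.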

Multiple transactions in a single hibernate session (with Spring)

Is it possible to model the following using Hibernate + Spring?
Open session
Begin transaction
Do some work
Commit
Begin transaction
More work
Commit
Close session
I use the Spring TransactionTemplate which does both session + transaction lifetime scoping.
The reason is that sometimes I have a few stages in a business process and I would like to commit after each stage completes. However I would like to continue using the same persistent objects. If I have a separate session per transaction then I get transient/detached exceptions because the original session was closed.
Is this possible?
Yes, Hibernate's Sessions can begin and commit several transactions. What you need to do is store the open session somewhere and then reuse it. Note that Session is not a thread-safe object, but if you're sure it won't have concurrency problems, all you need is to use TransactionSynchronizationUtils to bind a session to the thread resources and then unbind it when desired; you can find an example here, or you can take a look at OSIV and its standard implementations.
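As a rough sketch of that sequence with a plain Hibernate Session (the SessionFactory is assumed to be configured elsewhere; with Spring you would hide this behind TransactionTemplate or OSIV as described below):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class TwoStageProcess {

    private final SessionFactory sessionFactory; // assumed to be injected/configured

    public TwoStageProcess(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void run() {
        Session session = sessionFactory.openSession();
        try {
            Transaction tx1 = session.beginTransaction();
            // ... stage 1: load and modify persistent objects ...
            tx1.commit();

            Transaction tx2 = session.beginTransaction();
            // ... stage 2: keep working with the same, still-attached objects ...
            tx2.commit();
        } finally {
            session.close(); // the objects become detached only here
        }
    }
}
```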
This is a very complicated thing; it's much easier, and thus preferable, to close your session right away and not reuse it, because reusing it can bring trouble:
The objects inside the session cache are not automatically evicted, so your Session will grow in size until you run out of memory.
The objects inside the session are not flushed unless they are dirty, so the chance that an object has been changed by another user in the meantime keeps growing. Ensure that only a single user is going to change writable objects.
If an exception happens during one of the steps, you have to ensure you close the session. Once an exception has occurred inside a Session, that object is not reusable.
If a transaction was rolled back, the session is cleared by Spring, so all your objects become detached. Make sure you discard everything if at least one of the transactions was rolled back.
You could achieve this using the OpenSessionInView pattern. Spring provides a javax.servlet.Filter implementation which you could use if you're working in a servlet environment (question doesn't say so). This will ensure that your Hibernate session is kept open for the duration of the request rather than just for an individual transaction.
The Javadoc on this class is pretty comprehensive and might be a good starting point.
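If you go the OSIV route in a servlet environment, here is a hedged sketch of registering Spring's OpenSessionInViewFilter programmatically (a web.xml filter entry is equivalent; the exact package, e.g. org.springframework.orm.hibernate5.support, depends on your Spring/Hibernate versions, and by default the filter looks up a SessionFactory bean named "sessionFactory" in the root web application context):

```java
import javax.servlet.FilterRegistration;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;

import org.springframework.orm.hibernate5.support.OpenSessionInViewFilter;
import org.springframework.web.WebApplicationInitializer;

public class OsivInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        // Keep one Hibernate Session open for the whole request, spanning several transactions.
        FilterRegistration.Dynamic osiv =
                servletContext.addFilter("openSessionInView", new OpenSessionInViewFilter());
        osiv.addMappingForUrlPatterns(null, false, "/*");
    }
}
```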

Ehcache getting out of sync with database

On a Java EE server using CMT, I am using ehcache to implement a caching layer between the business object layer (EJBs) and the Data Access layer (POJOs using JDBC). I seem to be experiencing a race condition between two threads accessing the same record while using a self-populating Ehcache. The cache is keyed on the primary key of the record.
The scenario is:
The first thread updates the record in the database and removes the record from cache (but the database commit doesn't necessarily happen immediately - there may be other queries to follow.)
The second thread reads the record, causing the cache to be re-populated.
The first thread commits transaction.
This is all happening in a fraction of a second. It results in the cache being out of sync with the database, and subsequent reads of the record returning the stale cached data until another update is performed, or the entry expires from the cache. I can handle stale data for short periods (the typical length of a transaction), but not minutes, which is how long I would like to cache objects.
Any suggestions for avoiding this race condition?
UPDATE:
Clearing the cache after the transaction has committed would certainly be ideal. The question is, in a J2EE environment using CMT, when the caching layer is sandwiched between the business layer (stateless session EJBs) and the data access layer, how to do this?
To be clear about the constraints this imposes, the method call in question may or may not be in the same transaction as additional method calls that happen before or after. I can't force a commit (or do this work in a separate transaction) since that would change the transaction boundaries from what the client code expects. Any subsequent exceptions would not roll back the entire transaction (unnecessarily clearing the cache in this case is an acceptable side-effect). I can't control the entry points into the transaction, as it is essentially an API that clients can use. It is not reasonable to push the responsibility of clearing the cache to the client application.
I would like to be able to defer any cache clearing operations until the entire transaction is committed by the EJB container, but I have found no way to hook into that logic and run my own code with a stateless session bean.
UPDATE #2:
The most promising solution so far, short of a major design change, is to use ehcache 2.0's JTA support: http://ehcache.org/documentation/apis/jta
This means upgrading to ehcache 2.x and enabling XA transactions for the database as well, which could potentially have negative side-effects. But it seems like the "right" way.
You are using transactions, so it makes more sense to remove the cache entry after the commit; that is when the change really happens.
That way you see the old data only during the length of the transaction, and all reads afterwards have the latest view.
Update: since this is CMT-specific, you should look at the SessionSynchronization interface and its afterCompletion() method. This is shown in this tutorial.
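A hedged sketch of that idea (note that SessionSynchronization is only available to stateful session beans, so it doesn't directly fit the stateless beans from the question; the bean, cache name and key below are illustrative): the eviction is deferred until the container reports the outcome of the transaction.

```java
import javax.ejb.SessionSynchronization;
import javax.ejb.Stateful;

import net.sf.ehcache.CacheManager;

@Stateful
public class RecordServiceBean implements SessionSynchronization {

    private Long pendingEvictionId; // remembered until the transaction completes

    public void updateRecord(Long id) {
        // ... update the row through the data access layer ...
        pendingEvictionId = id;
    }

    @Override
    public void afterBegin() { }

    @Override
    public void beforeCompletion() { }

    @Override
    public void afterCompletion(boolean committed) {
        // Only evict once the commit has actually happened, so a concurrent
        // read cannot repopulate the cache with pre-commit data.
        if (committed && pendingEvictionId != null) {
            CacheManager.getInstance().getCache("records").remove(pendingEvictionId);
        }
        pendingEvictionId = null;
    }
}
```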

Where does the responsibility lie to ensure the ACID properties of a transaction?

I was going through the ACID properties of transactions and encountered the statement below across different sites:
ACID is the acronym for the four properties guaranteed by transactions: atomicity, consistency, isolation, and durability.
My question is specifically about the phrase "guaranteed by transactions". In my experience these properties are not taken care of by the transaction automatically; as Java developers we need to ensure that these criteria are met.
Let's go through each property:
Atomicity: assume that when we create the customer, the account must be created too, as it is compulsory. Now, during the transaction, the customer gets created but some exception occurs during account creation. The developer can now go two ways: either he rolls back the complete transaction (atomicity is met in this case) or he commits the transaction, so the customer is created but not the account (which violates atomicity). So the responsibility lies with the developer?
Consistency: the same reasoning holds for consistency too.
Isolation: by definition, isolation makes a transaction execute without interference from other processes or transactions. But this is only achieved when we set the isolation level to Serializable; otherwise, at levels like read committed or read uncommitted, changes are visible to other transactions. So the responsibility lies with the developer to make it really isolated with Serializable?
Durability: if we commit the transaction, then even if the application crashes, it should remain committed when the application restarts. Not sure whether this needs to be taken care of by the developer or by the database vendor/transaction?
So as per my understanding, these ACID properties are not guaranteed automatically; rather, we as developers should achieve them. Please let me know if the above understanding of each point is correct? I would appreciate it if you could reply to each point (a yes/no will also do).
As per my understanding, read committed should be the most logical isolation level in most applications, though it depends on the requirements too.
Transactions guarantee ACID, more or less:
1) Atomicity. A transaction guarantees that either all changes are made or none of them. But you need to set the start and end of a transaction and perform the commit or rollback yourself. Depending on the technology you use (EJB...), transactions are container-managed, with the start and end bound to the whole method you are writing. You can control by configuration whether an invoked method requires a new transaction, an existing one, no transaction... (see the sketch after this list).
2) Consistency. Guaranteed by atomicity.
3) Isolation. You must define the isolation level your application needs. The default value depends on the database, container... The most common one is READ COMMITTED. Be careful with locks, as they can cause deadlocks depending on your logic and isolation level.
4) Durability. Managed entirely by the database. If your commit executes without error, nearly all databases guarantee durability of the changes, but some scenarios can break that guarantee (writes to disk are cached in memory and flushed later...).
In general, you should be aware of transactions and either configure them in the container or declare the start and end (commit, rollback) in code.
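For point 1, a minimal sketch of code-declared demarcation with plain JPA (Customer and Account instances are illustrative): the developer chooses where the transaction starts and ends, and the database then makes that unit atomic.

```java
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
try {
    tx.begin();
    em.persist(customer);
    em.persist(account);   // if this throws, we roll back below
    tx.commit();           // both rows become durable and visible together
} catch (RuntimeException e) {
    if (tx.isActive()) tx.rollback();   // neither row is kept
    throw e;
} finally {
    em.close();
}
```

With container-managed transactions the same choice is expressed declaratively, e.g. via @Transactional or @TransactionAttribute, instead of explicit begin/commit calls.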
Database transactions are atomic: They either happen in their entirety or not at all. By itself, this says nothing about the atomicity of business transactions. There are various strategies to map business transactions to database transactions. In the simplest case, a business transaction is implemented by one database transaction (where a business transaction is aborted by rolling back the database one). Then, atomicity of database transactions implies atomicity of business transactions. However, things get tricky once business transactions span several database transactions ...
See above.
Your statement is correct. Often, the weaker guarantees are sufficient to prove correctness.
Database transactions are durable (unless there is a hardware failure): if the transaction has committed, its effect will persist until other transactions change the data. However, the calling code might not learn whether a transaction has committed if the database, or the network between the database and the calling code, fails. Therefore
If we commit the transaction, then even if the application crashes, it should be committed on restart of the application.
is wrong. If the transaction has committed, there is nothing left to do.
To summarize, the database does give strong guarantees - about the behaviour of the database. Obviously, it can not give guarantees about the behaviour of the entire application.

Synchronizing spring transaction

We use Spring and Hibernate in our project and have a layered architecture: Controller -> Service -> Manager -> Dao. Transactions start in the Manager layer. A method in the service layer which updates an object in the db is called by many threads, and this causes a stale object exception to be thrown. So I made this method synchronized, but I still see the stale object exception thrown. What am I doing wrong here? Any better way to handle this case?
Thanks for the help in advance.
The stale object exception is thrown when an entity has been modified between the time it was read and the time it's updated. This can happen inside a single transaction, but may also happen when you read an object in a transaction, modify it (in the controller layer, for example), then start another transaction and merge/update it (in this case, minutes or hours can separate the read and the update).
The exception is thrown to help you avoid conflicts between users.
If you don't care about conflicts (i.e. the last update always wins and replaces what the previous ones have written), then don't use optimistic locking. If you're concerned about conflicts, then StaleObjectExceptions will happen, and you should show a meaningful message to the end user, asking them to reload the data and try to modify it again. There's no way to avoid them. You just have to be optimistic and hope that they won't happen often.
Note that your synchronized trick will work only if
the exception happens only when reading and writing in the same transaction
updates to the entity are only made by this service
your application is not clustered.
It might also reduce throughput dramatically, because you forbid any concurrent updates, regardless of which entities are updated by the concurrent transactions. It's as if you locked the whole table for the duration of the whole transaction.
My guess is that you would need to configure optimistic locking on the Hibernate side.
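If it isn't configured yet, a minimal sketch of version-based optimistic locking (MyEntity is an illustrative name): Hibernate checks and increments the version column on every update, and a concurrent modification then surfaces as the stale object exception instead of a silent overwrite.

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class MyEntity {

    @Id
    @GeneratedValue
    private Long id;

    @Version            // compared and incremented by Hibernate on each update
    private long version;

    // ... business fields, getters and setters ...
}
```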
