Manual commits with @Transactional annotation - Java

I use Spring + HibernateTemplate to process entities. There is a rather large number of entities, too many to load all at once, so I retrieve an iterator from the Hibernate template.
Every entity should be processed as a single unit of work. I tried to put the entity processing in a separate transaction (propagation = REQUIRES_NEW). However, I ended up with an exception stating that the proxy is bound to two sessions. This is due to the suspended transaction used for the iterator's lazy loading. I am using the same bean for lazy loading and for processing entities. (Maybe it should be refactored into two separate DAOs: one for lazy loading and one for processing?)
Then I tried to use a single transaction that is committed after each entity is processed. Much better: there are no exceptions during entity processing, but after processing is finished and the method returns, an exception is thrown from Spring's transaction-managing code.
@Transactional
public void processManyManyEntities() {
    org.hibernate.Session hibernateSession = myDao.getHibernateTemplate().getSessionFactory().getCurrentSession();
    Iterator<Entity> entities = myDao.findEntitesForProcessing();
    while (entities.hasNext()) {
        Entity entity = entities.next();
        hibernateSession.beginTransaction();
        anotherBean.processSingleEntity(entity);
        hibernateSession.getTransaction().commit();
    }
}
processSingleEntity is a method in another bean annotated with @Transactional, so there is one transaction for all entities. I checked which transaction causes the exception: it is the very first transaction returned from hibernateSession.beginTransaction(), so it is simply not updated in the transaction manager.
There are several questions:
is it possible to avoid the session-binding exception without refactoring the DAO? (no longer relevant: the problem is with the Hibernate Session, not with the DAO)
is it possible to update the transaction in the transaction manager? (solved)
is it possible to use the same transaction (for anotherBean.processSingleEntity(entity)) without the @Transactional annotation on processManyManyEntities?

I would prefer to remove the @Transactional annotation on processManyManyEntities() and to eagerly load all the data from findEntitesForProcessing.
If you want each entity to be handled transactionally in processSingleEntity, the @Transactional on processSingleEntity is fine; you don't have to annotate processManyManyEntities() with @Transactional. But when lazy loading is involved, eager loading is a necessary means to prevent the source data from being loaded in another session, say, inside processSingleEntity.
Since the transaction is per entity, the transactional boundary around loading the source data is not part of your transaction. Don't let lazy loading complicate your intended transaction for modifying the data.
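A minimal sketch of that arrangement, assuming findEntitesForProcessing is changed to return an eagerly loaded List instead of a lazy Iterator:

// Sketch only: no @Transactional here, so no outer transaction is suspended.
// findEntitesForProcessing is assumed to return fully initialized entities.
public void processManyManyEntities() {
    List<Entity> entities = myDao.findEntitesForProcessing();
    for (Entity entity : entities) {
        // each call runs in its own transaction via @Transactional on processSingleEntity
        anotherBean.processSingleEntity(entity);
    }
}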

It is possible to update the transaction in the transaction manager. All you need to do is get the SessionHolder instance from TransactionSynchronizationManager.getResource(sessionFactory) and set the transaction in the session holder to the current one. That makes it possible to commit the transaction whenever necessary and allows the transaction to be committed by Spring's transaction manager after the method returns.
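A hedged sketch of that idea, assuming the Hibernate 3 era Spring classes implied by HibernateTemplate in the question (that SessionHolder exposes setTransaction):

// sessionFactory is the same factory the HibernateTemplate was built with
SessionHolder holder = (SessionHolder)
        TransactionSynchronizationManager.getResource(sessionFactory);

while (entities.hasNext()) {
    Entity entity = entities.next();
    Transaction tx = hibernateSession.beginTransaction();
    holder.setTransaction(tx); // keep Spring's transaction manager pointing at the new transaction
    anotherBean.processSingleEntity(entity);
    tx.commit();
}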

Related

How do we set database session variables in Hibernate?

When using Hibernate and JPA, I have an existing abstract DAO class that sets up an entity manager this way:
@PersistenceContext(unitName = "<name>")
private EntityManager entityManager;
And in certain methods, it's used in the following manner:
public ObjectType findByPrimaryKey(int id) {
    return entityManager.find(ObjectType.class, id);
}
I wanted to set a database configuration parameter in the same transaction as the "find" query. However, I can't seem to find the internal transaction that the entityManager uses. I wrote an aspect that checks for the @Transactional annotation and sets the variable there, and added @Transactional on top of the findByPrimaryKey method, but the variable still didn't get set in the session.
Is there something incorrect here, or is there another way to do it? Ideally, I want to set a special variable before every query.
The final solution was to combine Spring's @Transactional annotation, which automatically opens a transaction before my data access layer is called, with an aspect that pointcuts on the @Transactional annotation and runs the queries within that transaction. Just executing any query within an @Transactional method, before calling the Hibernate data access layer, will also work.
Without the @Transactional annotation, I couldn't control Hibernate's transaction management.

How does synchronization work?

For example, I have a @Stateless Java bean:
@Stateless(mappedName = "test")
public class Test implements ITest {

    @PersistenceContext
    private EntityManager em;

    @Override
    public void updateActivity(SomeObj activity) throws Exception {
        em.persist(activity);
    }
}
Because it's a container-managed bean, when does the container decide to synchronize the persistence context with the DB? In this case I immediately see the results in the DB, but sometimes they do not seem to appear there immediately, right?
Please explain how the synchronization between the context and the DB works in container-managed mode. When does the container decide to synchronize the context with the DB?
This is driven by the transaction propagation configuration, as your EJB bean might be one of many managed beans participating in a single transaction. It gets more complex if there are multiple transaction sources in flight, e.g. XA two-phase commit. Generally, the changes are flushed to the database on transaction commit, although this further depends on the transaction isolation level or the presence of savepoints when nested transactions are used.
Check the @TransactionAttribute annotation docs or look for a tutorial that explains transaction propagation.
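For illustration, a sketch of the same bean with the default REQUIRED attribute made explicit; the persist is flushed when the container-managed transaction around the call commits:

@Stateless(mappedName = "test")
public class Test implements ITest {

    @PersistenceContext
    private EntityManager em;

    // REQUIRED is the default: join the caller's transaction or start a new one;
    // the persistence context is flushed when that transaction commits.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    @Override
    public void updateActivity(SomeObj activity) throws Exception {
        em.persist(activity);
    }
}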

How does Spring Data clean persisted entities in a transactional method?

I need to receive and save a huge amount of data using Spring Data over Hibernate. Our server does not have enough RAM allocated to keep all entities in memory at the same time; we would definitely get an OutOfMemory error.
So we obviously need to save the data in batches. We also need to use @Transactional to be sure that either all data is persisted or none of it is, in case of even a single error.
So, the question: during a @Transactional method, does Spring Data keep storing entities in RAM, or are entities that have been flushed accessible to the garbage collector?
And what is the best approach to process a huge amount of data with Spring Data? Maybe Spring Data isn't the right tool for problems like this.
During a @Transactional method, does Spring Data keep storing entities in RAM, or are flushed entities accessible to the garbage collector?
The entities keep being stored in RAM (i.e. in the entityManager) until the transaction commits or rolls back, or the entityManager is cleared. That means the entities are only eligible for GC after the transaction commits/rolls back or entityManager.clear() is called.
So, what is the best approach to process a huge amount of data with Spring Data?
The general strategy to prevent OOM is to load and process the data batch by batch. At the end of each batch, you should flush and clear the entityManager so that it can release its managed entities for GC. The general code flow should be something like this:
@Component
public class BatchProcessor {

    // Spring ensures this entityManager is the same one that started the
    // transaction, thanks to @Transactional
    @PersistenceContext
    private EntityManager em;

    @Autowired
    private FooRepository fooRepository;

    @Transactional
    public void startProcess() {
        processBatch(1, 100);
        processBatch(101, 200);
        processBatch(201, 300);
        //blablabla
    }

    private void processBatch(int fromFooId, int toFooId) {
        List<Foo> foos = fooRepository.findFooIdBetween(fromFooId, toFooId);
        for (Foo foo : foos) {
            //process a foo
        }
        // The reason to flush is to send the update SQL to the DB.
        // Otherwise the updates would be lost when we clear the entity
        // manager afterwards.
        em.flush();
        em.clear();
    }
}
Note that this practice is only for preventing OOM, not for achieving high performance. So if performance is not your concern, you can safely use this strategy.

Begin and Rollback transactions with JdbcTemplate

I'm working with Spring's JdbcTemplate in some desktop applications.
I'm trying to roll back some database operations, but I don't know how to manage the transaction with this object (JdbcTemplate). I'm doing multiple inserts and updates through a sequence of methods. When any operation fails, I need to roll back all previous operations.
Any idea?
Update: I tried to use @Transactional, but the rollback doesn't happen.
Do I need some prior configuration on my JdbcTemplate?
My Example:
@Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW)
public void Testing() {
    jdbcTemplate.execute("insert into Acess_Level(IdLevel, Description) values(1, 'Admin')");
    jdbcTemplate.execute("insert into Acess_Level(IdLevel, Description) values(STRING, 'Admin')");
}
IdLevel is a NUMERIC column, so the second statement will cause an exception. But when I look at the table in the database, I can see the first insert, even though I think that operation should have been rolled back.
What is wrong?
JdbcTemplate doesn't handle transactions by itself. You should use a TransactionTemplate or @Transactional annotations: with these, you can group operations within a transaction and roll back all operations in case of errors.
@Transactional
public void someMethod() {
    jdbcTemplate.update(..)
}
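The TransactionTemplate alternative mentioned above looks roughly like this; the class and method names are illustrative, and a PlatformTransactionManager bean is assumed to be available:

public class AccessLevelDao {

    private final JdbcTemplate jdbcTemplate;
    private final TransactionTemplate transactionTemplate;

    public AccessLevelDao(JdbcTemplate jdbcTemplate, PlatformTransactionManager txManager) {
        this.jdbcTemplate = jdbcTemplate;
        this.transactionTemplate = new TransactionTemplate(txManager);
    }

    public void insertLevels() {
        transactionTemplate.execute(status -> {
            jdbcTemplate.update("insert into Acess_Level(IdLevel, Description) values(?, ?)", 1, "Admin");
            jdbcTemplate.update("insert into Acess_Level(IdLevel, Description) values(?, ?)", 2, "User");
            // any unchecked exception thrown here rolls back both inserts
            return null;
        });
    }
}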
In Spring, private methods don't get proxied, so the annotation will not work. See this question: Does Spring @Transactional attribute work on a private method?
Create a service and put @Transactional on the public methods that you want to be transactional. It would be more normal-looking Spring code to have simple DAO objects that each do one job, and to inject several of them, than to have a complicated DAO object that performs multiple SQL calls within its own transaction.
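A sketch of that layering; the service name and the DAO's insert method are made up for illustration:

@Service
public class AccessLevelService {

    private final AccessLevelDao accessLevelDao;

    public AccessLevelService(AccessLevelDao accessLevelDao) {
        this.accessLevelDao = accessLevelDao;
    }

    // Public method on a Spring bean: the proxy can intercept it, so one
    // transaction wraps both DAO calls and rolls them back together on failure.
    @Transactional
    public void createDefaultLevels() {
        accessLevelDao.insert(1, "Admin");
        accessLevelDao.insert(2, "User");
    }
}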

Why do we have to manually flush() the EntityManager in an extended PersistenceContext?

In our J2EE application, we use an EJB 3 stateful bean to allow the front-end code to create, modify and save persistent entities (managed through JPA 2).
It looks something like this:
@LocalBean
@Stateful
@TransactionAttribute(TransactionAttributeType.NEVER)
public class MyEntityController implements Serializable
{
    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    private MyEntity current;

    public void create()
    {
        this.current = new MyEntity();
        em.persist(this.current);
    }

    public void load(Long id)
    {
        this.current = em.find(MyEntity.class, id);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void save()
    {
        em.flush();
    }
}
Very important: to avoid committing too early, only the save() method is within a transaction, so calling create() inserts nothing in the database.
Curiously, in the save() method we have to call em.flush() in order to really hit the database. In fact, I tried and found that we can also call em.isOpen() or em.getFlushMode(), really anything that is "em-related".
I don't understand this point. As save() is in a transaction, I thought that at the end of the method the transaction would be committed, and so the persistence context automatically flushed. Why do I have to manually flush it?
Thanks,
Xavier
To be direct and to the metal, there will be no javax.transaction.Synchronization objects registered for the EntityManager in question until you actually use it in a transaction.
We in app-server-land will create one of these objects to do the flush() and register it with the javax.transaction.TransactionSynchronizationRegistry or javax.transaction.Transaction. This can't be done unless there is an active transaction.
That's the long and short of it.
Yes, an app server could very well keep a list of resources it gave the stateful bean and auto-enroll them in every transaction that stateful bean might start or participate in. The downside of that is you completely lose the ability to decide which things go in which transactions. Maybe you have 2 or 3 different transactions to run on different persistence units and are aggregating the work in your extended persistence context for a very specific transaction. It's really a design issue, and the app server should leave such decisions to the app itself.
You use it in a transaction and we'll enroll it in the transaction. That's the basic contract.
Side note, depending on how the underlying EntityManager is handled, any persistent call to the EntityManager may be enough to cause a complete flush at the end of the transaction. Certainly, flush() is the most direct and clear but a persist() or even a find() might do it.
If you use an extended persistence context, all operations on managed entities done inside non-transactional methods are queued to be written to the database. Once you call flush() on the entity manager within a transaction context, all queued changes are written to the database. In other words, the fact that you have a transactional method doesn't itself commit the changes when the method exits (as it would in CMT); flushing the entity manager actually does. You can find a full explanation of this process here.
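A small usage sketch of the stateful bean from the question under that model:

// Usage sketch from a client of the stateful bean above (injection omitted)
public void editAndSave(MyEntityController controller) {
    controller.create(); // entity becomes managed; no SQL yet (NEVER = no transaction)
    controller.save();   // REQUIRES_NEW opens a transaction; em.flush() pushes the queued INSERT
}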
Because there is no way to know "when" the client is done with the session (extended scope).
