I'm using ChainedTransactionManager in my project, but it has recently been marked @Deprecated. I would like to know whether there is any analogue for chaining a transaction across two databases.
JtaTransactionManager with Atomikos apparently does not fit my case.
Instead of using ChainedTransactionManager, you can register a TransactionSynchronization to explicitly follow transaction cleanup, with simplified semantics in case of exceptions (see the sketch after the list below).
AbstractReactiveTransactionManager, which implements ReactiveTransactionManager and Serializable, provides the following workflow handling:
determines if there is an existing transaction;
applies the appropriate propagation behavior;
suspends and resumes transactions if necessary;
checks the rollback-only flag on commit;
applies the appropriate modification on rollback;
triggers registered synchronization callbacks.
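For illustration, here is a minimal sketch of the synchronization-based alternative, assuming Spring 5.3+ (where TransactionSynchronization's methods have default implementations). The primaryRepository and secondaryService names are assumptions, and this is not a drop-in replacement for ChainedTransactionManager: the second write is coordinated with the first, but not atomic with it.

@Transactional // driven by the primary database's transaction manager
public void saveToBothDatabases(MyEntity entity) {
    primaryRepository.save(entity);
    // org.springframework.transaction.support.TransactionSynchronizationManager
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            // runs only once the primary transaction has committed; a failure here
            // is not rolled back automatically and must be retried or compensated
            secondaryService.save(entity);
        }
    });
}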
Related
I have a service-layer method annotated with @Transactional(readOnly=true), and this method throws a RuntimeException quite often (say, a NotFoundException).
I'm also using Hibernate as the ORM for database interaction.
Is this a legal pattern? What is the default rollback behaviour in this case? Can it somehow harm the connection's state or lead to any problems?
This isn't a "why not try it yourself?" question. I suspect that this could lead to the "Transaction rolled back because it has been marked as rollback-only" error in the same method after some number of exceptions. This could be a very specific PostgreSQL JDBC driver error. That is why I'm asking about this design in general: is it legal or not?
So, as far as I understand, you are worried about the rollback. A readOnly transaction is essentially a select, and usually there is nothing to roll back from a read. The one place where this is handy is when you read under a lock: when the transaction finishes, that lock is released.
AFAIK, readOnly will set the flush mode to FlushMode.NEVER, and that is good and bad at the same time. Good, because there will be no dirty checking, as described here. Bad, because if you call a read/write transaction within a readOnly transaction, the transaction will silently fail to commit because the session is not flushed. This is easily testable, by the way - and I hope things have not changed since I tried it.
Then there is the connection pool. I know that C3P0's default policy is to roll back any uncommitted work; the flag that controls this is autoCommitOnClose.
Then there is this link about readOnly and Postgres - which I have not worked with and can't really offer an opinion on.
Now to your point about "Transaction rolled back because it has been marked as rollback-only". For a readOnly transaction there might be nothing to roll back, as I said before, so this really depends on how you chain your @Transactional methods, IMO.
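For reference, a minimal sketch of the pattern under discussion (Customer, repository and NotFoundException stand in for the asker's hypothetical types):

@Transactional(readOnly = true)
public Customer findCustomer(Long id) {
    // the unchecked exception below marks the transaction rollback-only, but since
    // a read-only transaction has written nothing, the rollback is effectively a no-op
    return repository.findById(id)
            .orElseThrow(() -> new NotFoundException("No customer with id " + id));
}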
I'm using the spring-data-jpa interface CrudRepository to save large data sets in a database during a daily batch import:
@Bean
public ItemWriter<MyEntity> jpaItemWriter() {
    // delegate each chunk of items to the injected Spring Data repository's save method
    RepositoryItemWriter<MyEntity> writer = new RepositoryItemWriter<>();
    writer.setRepository(repository);
    writer.setMethodName("save");
    return writer;
}
The default implementation of this interface is SimpleJpaRepository, which also offers a saveAndFlush() method. What is that for? Would it help me in any way, e.g. regarding performance, to call it rather than save()?
One example would be if you were using optimistic locking and wanted to explicitly catch an OptimisticLockException and throw it back to the client. If the changes are only flushed to the database on transaction commit (i.e. when your transactional method returns), then you cannot do so. Explicitly flushing from within your transactional method allows you to catch and rethrow/handle it.
From the JPA Specification:
3.4.5 OptimisticLockException: Provider implementations may defer writing to the database until the end of the transaction, when consistent with the lock mode and flush mode settings in effect. In this case, an optimistic lock check may not occur until commit time, and the OptimisticLockException may be thrown in the "before completion" phase of the commit. If the OptimisticLockException must be caught or handled by the application, the flush method should be used by the application to force the database writes to occur. This will allow the application to catch and handle optimistic lock exceptions.
So, in answer to your question: it is not performance-related, but there may be cases where you want to explicitly flush to the database from within a transactional method.
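As a minimal sketch of that case (MyEntity, repository and MyConflictException are illustrative names; note that Spring translates the JPA OptimisticLockException into its OptimisticLockingFailureException hierarchy):

@Transactional
public void rename(Long id, String newName) {
    MyEntity entity = repository.findById(id).orElseThrow();
    entity.setName(newName);
    try {
        // forces the UPDATE and the version check to happen here, inside the method
        repository.saveAndFlush(entity);
    } catch (OptimisticLockingFailureException e) {
        // with plain save(), this failure would only surface at commit time,
        // after this method had already returned
        throw new MyConflictException("Concurrent modification detected", e);
    }
}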
According to Spring Data's Javadoc, saveAndFlush:
Saves an entity and flushes changes instantly.
If you use the save method instead, changes are flushed only when the underlying transaction commits.
I am working on a legacy application. We are moving it from JDBC to Spring 3.2 + Hibernate 4.1.12 + JTA 2 with declarative transactions. I see that the Container-Managed Transactions (CMT) are transacting and rolling back as one would expect. We are using Infinispan as the second level cache (2LC). There is one wrinkle...
There is a portion of the code with a different entry point that is run in a different thread and uses programmatic transactions or Bean-Managed Transactions (BMT). In the BMT path, I see that in the underlying service layer, which is using CMT, the transactions are joining with the BMT as one would hope and expect.
The persistence unit, data source, etc. are the same for both entry points. In both cases, the Hibernate autoflush code is aware that there is a transaction and flushes to the database driver. In the CMT entry point, the database driver holds the data until told to commit or roll back. In the BMT path, the data is pushed into the database on flush - the subsequent commit or rollback has no effect or apparent meaning. The transaction manager is the JtaTransactionManager, defined in a @Configuration class with @EnableTransactionManagement to enable the CMT, rather than via the <tx:annotation-driven/> element.
The singleton JtaTransactionManager bean is wired with the Arjuna UserTransaction and TransactionManager via jtaPropertyManager.getJTAEnvironmentBean().getTransactionManager() and jtaPropertyManager.getJTAEnvironmentBean().getUserTransaction(). Both the UserTransaction and TransactionManager are prototype @Bean definitions.
I am able to confirm whether or not the data is in the database by running a query from another tool while debugging.
When I am unit testing, the data commits and rolls back as expected for both the BMT and the CMT entry point.
The BMT is managed by a class that has the transaction begin and end in different methods. It also has methods that perform the actual units of work. The transactions for the BMT are initiated with the PlatformTransactionManager, not the TransactionTemplate. The class is driven by another class that manages the logic flow. I know that the transactions are beginning and ending as expected. Reading various other discussions, it seems implied that transactional control should sit within a single method. I would agree that this would be preferred, but is it essential?
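For illustration, a minimal sketch of that split pattern using Spring's programmatic API (the class and method names are mine, not from the actual application):

public class BmtControl {
    private final PlatformTransactionManager txManager;
    private TransactionStatus status;

    public BmtControl(PlatformTransactionManager txManager) {
        this.txManager = txManager;
    }

    public void begin() {
        // binds the new transaction to the current thread
        status = txManager.getTransaction(new DefaultTransactionDefinition());
    }

    // ... methods performing the actual unit of work go here ...

    public void end(boolean commit) {
        if (commit) {
            txManager.commit(status);
        } else {
            txManager.rollback(status);
        }
    }
}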
If a CMT-managed servlet in Spring spawns a new Thread and starts it with a plain thread.start(), is it reasonable to expect that a BMT within that new Thread would be able to manage its transactions as described above?
The datasource is retrieved via JNDI. Using XA or non-XA does not influence the outcome.
I am unable to post the code.
As a reference, here is the link to the Spring 3.1 docs on transactions in chapter 11.
Added 2013/10/04 - I see that Spring uses the JtaTransactionManagerBeanDefinitionParser to construct the desired JtaTransactionManager based on the perceived container. When this is used, the JTA transaction manager will, in afterPropertiesSet, set on itself the UserTransaction, TransactionManager, and TransactionSynchronizationRegistry.
It appears that I do actually still leak data in the CMT case, but that this is hard to perceive/observe without a debugger or without forcing an error unnaturally, since the transactions typically commit.
It appears that my issue is that I have partially bypassed the JCA such that the JCA is using a different TransactionManager.
Partial Answer - Because I have seen this transact properly in a mix of CMT and BMT, I know that it is possible to have the BMT transaction started in one method and committed in another.
The question remains: if a CMT-managed servlet in Spring spawns a new Thread and starts it with a plain thread.start(), is it reasonable to expect that a BMT within that new Thread would be able to manage its transactions as described above?
From the JTA 1.1 Specification (http://download.oracle.com/otn-pub/jcp/jta-1.1-spec-oth-JSpec/jta-1_1-spec.pdf), section 3.1, it is clear that the transaction is bound to the thread. This is managed by the TransactionManager. One should be able to expect the thread to perform actions within a transactional context if the thread is the one that created the transaction.
Note that the support of nested transactions is optional as cited in the same portion of the JTA specification.
The actual issue I was encountering was that the managed datasource was using a different instance of the transaction manager than we had as a bean in the application. Changing the application code to do a JNDI lookup of the container-provided TransactionManager allowed the managed datasource to participate in the same transaction as the application.
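A sketch of that fix, assuming a JBoss-style JNDI name for the container TransactionManager (the exact names are container-specific):

@Bean
public JtaTransactionManager transactionManager() throws NamingException {
    InitialContext ctx = new InitialContext();
    // look up the container-provided instances instead of constructing our own,
    // so the managed datasource and the application share one transaction manager
    TransactionManager tm = (TransactionManager) ctx.lookup("java:/TransactionManager");
    UserTransaction ut = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
    return new JtaTransactionManager(ut, tm);
}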
I have a DBManager singleton that ensures instantiation of a single EntityManagerFactory. I'm debating the use of a single EntityManager versus multiple EntityManagers, though, because only a single transaction can be associated with an EntityManager at a time.
I need to use multiple transactions. JPA doesn't support nested transactions.
So my question is: in a normal application that uses transactions against a single database, would you use a single EntityManager at all? So far I have been using multiple EntityManagers, but I would like to see if a single one could do the trick and also speed things up a bit.
So I found the below helpful; hope it helps someone else too.
http://en.wikibooks.org/wiki/Java_Persistence/Transactions#Nested_Transactions
Technically, in JPA the EntityManager is in a transaction from the point it is created. So begin is somewhat redundant. Until begin is called, certain operations such as persist, merge, remove cannot be called. Queries can still be performed, and objects that were queried can be changed, although it is somewhat unspecified in the JPA spec what will happen to these changes; normally they will be committed, however it is best to call begin before making any changes to your objects. Normally it is best to create a new EntityManager for each transaction to avoid having stale objects remaining in the persistence context, and to allow previously managed objects to garbage collect. After a successful commit the EntityManager can continue to be used, and all of the managed objects remain managed. However, it is normally best to close or clear the EntityManager to allow garbage collection and avoid stale data. If the commit fails, then the managed objects are considered detached, and the EntityManager is cleared. This means that commit failures cannot be caught and retried; if a failure occurs, the entire transaction must be performed again. The previously managed objects may also be left in an inconsistent state, meaning some of the objects' locking version may have been incremented. Commit will also fail if the transaction has been marked for rollback. This can occur either explicitly, by calling setRollbackOnly, or is required to be set if any query or find operation fails. This can be an issue, as some queries may fail, but may not be desired to cause the entire transaction to be rolled back.
The rollback operation will roll back the database transaction only. The managed objects in the persistence context will become detached and the EntityManager is cleared. This means any object previously read should no longer be used, and is no longer part of the persistence context. The changes made to the objects will be left as is; the object changes will not be reverted.
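For illustration, the EntityManager-per-transaction pattern the quote recommends looks roughly like this (emf is the application's single EntityManagerFactory; newOrder is an illustrative entity):

EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
try {
    tx.begin();
    em.persist(newOrder);
    tx.commit();
} catch (RuntimeException ex) {
    if (tx.isActive()) {
        tx.rollback(); // as quoted above, the managed objects are now detached
    }
    throw ex;
} finally {
    em.close(); // lets the persistence context and its stale objects be garbage collected
}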
EntityManagers are by definition not thread-safe. So unless your application is single-threaded, using a single EM is probably not the way to go.
In EJB 3.x, for both the onMessage() method of MDBs and the @Timeout method of SLSBs and MDBs, there is no transaction propagation. That is, there is no client for the execution of the method, so a transaction can't possibly be propagated.
When using container-managed transactions, I would expect the two cases to accept the same javax.ejb.TransactionAttributeType values. However, they don't.
For the onMessage() method, REQUIRED and NOT_SUPPORTED are the acceptable transaction attributes, whereas for @Timeout methods they are REQUIRED, REQUIRES_NEW and NOT_SUPPORTED.
In particular, for @Timeout methods the spec says (par. 18.2.8):
Note that the container must start a new transaction if the REQUIRED (Required) transaction attribute is used. This transaction attribute value is allowed so that specification of a transaction attribute for the timeout callback method can be defaulted.
If I understand this correctly, REQUIRES_NEW would normally be used here, but because REQUIRED is the default for an EJB, it is also allowed for @Timeout methods, where it carries the same semantics as REQUIRES_NEW, since no transaction can possibly be propagated in.
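For reference, a sketch of the two cases side by side (the bean names are mine):

@MessageDriven
public class OrderListener implements MessageListener {
    // only REQUIRED and NOT_SUPPORTED are accepted here
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    @Override
    public void onMessage(Message message) { /* ... */ }
}

@Stateless
public class CleanupBean {
    // REQUIRED is accepted so that the default can apply, but with no caller
    // transaction to join it behaves like REQUIRES_NEW
    @Timeout
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void onTimeout(Timer timer) { /* ... */ }
}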
Questions:
Is my understanding correct?
Why isn't REQUIRES_NEW also acceptable in onMessage()? Is it somehow different with respect to transactions?
UPDATE:
The same goes for other cases where REQUIRES_NEW is supported: @Asynchronous and @PostConstruct/@PreDestroy methods.
Yes, your understanding is correct.
In my opinion, allowing REQUIRES_NEW on @Timeout is odd. The spec basically requires that the container update the persistent timer database within the same transaction as the timeout method. This isn't really any different from transactional JCA message delivery, except that it's more apparent in the JCA scenario that an external component is handling the transaction. I suppose you could argue that there is no Java EE component driving the @Timeout method, but in my opinion, it would have been better to disallow REQUIRES_NEW for both. Regardless, the inconsistency is odd, so perhaps MDB will be updated in a later version of the spec to allow REQUIRES_NEW.