EJB 3.0 Transaction propagation - java

With respect to EJB 3.0 transaction propagation, I have the following basic question.
This is my scenario: EJB Service -> POJO -> EJB DAO. I need to stick to this architecture due to some constraints within the organization.
So the transaction starts in the EJB Service, which delegates to a POJO that in turn calls the local EJB DAO. Within the EJB DAO I inject the persistence context and the EntityManager, and the methods are annotated with TransactionAttribute(REQUIRED). My question is: inside the DAO EJB, will the transaction context of the Service EJB be used, or will a new transaction be started because of the POJO layer in between?
Any help would be appreciated.
Thanks..Vijay

Since the transaction is started in the "EJB Service", it will be propagated to the "EJB DAO". The transaction is associated with something like a thread local (at least conceptually; I don't know how implementations do it). That holds unless the POJO does something like running the DAO in a newly created thread (which, for manually created threads, is inappropriate in Java EE anyway).
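To illustrate that propagation, here is a minimal sketch (the class names, the placeholder entity and the way the POJO reaches the DAO are all invented for illustration). With REQUIRED on both beans, the DAO joins the transaction started by the service, and the intermediate POJO is invisible to transaction propagation:

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

@Entity
class PurchaseOrder {          // placeholder entity
    @Id Long id;
}

// plain POJO in the middle; it has no transactional behaviour of its own
class OrderHelper {
    private final OrderDao dao;
    OrderHelper(OrderDao dao) { this.dao = dao; }
    void store(PurchaseOrder order) { dao.save(order); } // just forwards the call
}

@Stateless
public class OrderService {
    @EJB
    private OrderDao dao;      // local EJB DAO

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void placeOrder(PurchaseOrder order) {
        // a transaction is started here if none is active yet
        new OrderHelper(dao).store(order); // the POJO forwards to the DAO
    }
}

@Stateless
class OrderDao {
    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void save(PurchaseOrder order) {
        // REQUIRED joins the caller's transaction instead of starting a new one;
        // REQUIRES_NEW would be needed to force a separate transaction here
        em.persist(order);
    }
}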

Related

JTA vs Local transactions

Are local transactions and BMT the same?
Do we need a transaction manager for local transactions?
I read that a transaction manager will be ineffective for local transactions. Is that correct?
Does JTA give provision for both CMT and BMT?
There is no difference between local and global transaction handling as far as BMT or CMT is concerned.
BMT and CMT only define how the start and end of a transaction are demarcated. In CMT it is defined by calls to annotated methods; in BMT the start and end of a transaction are defined using the UserTransaction object.
If a global transaction, or more precisely a distributed transaction, is necessary, the transaction manager will arrange that, independent of BMT or CMT.
These global transactions, or two-phase commits, become necessary as soon as more than one transactional resource is involved. For instance, if you use a message-driven bean that calls a bean using bean-managed transaction handling and makes changes in a DBMS, the two-phase commit covers both resources: the message queue and the DBMS.
So, to answer your questions:
No; see the explanation above.
You can't do transactions with more than one resource without a transaction manager. A container providing distributed transactions, as Java EE containers normally do, will handle all transactions using a transaction manager. In JBoss you can configure a datasource as "no-jta", in which case you explicitly exempt it from two-phase commits, but I think the JBoss transaction manager will still handle the DB connections of such a datasource.
Yes, if by "give provision" you mean "supports".
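To make the demarcation difference concrete, here is a minimal sketch (bean and method names are invented). The CMT bean relies on the transaction attribute annotation, while the BMT bean drives the same work through the injected UserTransaction:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

// CMT: the container begins/commits the transaction around the method call
@Stateless
public class CmtAccountBean {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void transfer() {
        // transactional work; commit on normal return, rollback on system exception
    }
}

// BMT: the bean itself demarcates the transaction via UserTransaction
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
class BmtAccountBean {

    @Resource
    private UserTransaction utx;

    public void transfer() throws Exception {
        utx.begin();
        try {
            // transactional work
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}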
Below are a few points addressing your questions in order.
Global transaction support is available to Web and enterprise bean J2EE components and, with some limitations, to application client components.
Enterprise bean components can be subdivided into two categories: beans that use container-managed transactions (CMT), and those that use bean-managed transactions (BMT).
A local transaction containment (LTC) is used to define the application server behavior in an unspecified transaction context. An LTC is a bounded unit-of-work scope, within which zero, one, or more resource manager local transactions (RMLT) can be accessed. The LTC defines the boundary at which all RMLTs must be complete; any incomplete RMLTs are resolved, according to policy, by the container. An LTC is local to a bean instance; it is not shared across beans, even if those beans are managed by the same container.
LTCs are started by the container before dispatching a method on a J2EE component (such as an enterprise bean or servlet) whenever the dispatch occurs in the absence of a global transaction context. LTCs are completed by the container depending on the application-configured LTC boundary, for example, at the end of the method dispatch. There is no programmatic interface to the LTC support; LTCs are managed exclusively by the container and configured by the application deployer through transaction attributes in the application deployment descriptor.
A local transaction containment cannot exist concurrently with a global transaction. If application component dispatch occurs in the absence of a global transaction, the container always establishes an LTC for J2EE components at J2EE 1.3 or later.
If an application uses two or more resources, an external transaction manager is needed to coordinate the updates to all resource managers in a global transaction.
For more info on transaction managers: https://access.redhat.com/documentation/en-US/Fuse_ESB_Enterprise/7.1/html/EIP_Transaction_Guide/files/TxnManagers-WhatIs.html
BMT enterprise beans, application client components, and Web components can use the Java Transaction API (JTA) UserTransaction interface to define the demarcation of a global transaction. To obtain the UserTransaction interface, use a Java Naming and Directory Interface (JNDI) lookup of java:comp/UserTransaction, or use the getUserTransaction method from the SessionContext object. The UserTransaction interface is not available to CMT enterprise beans. If CMT enterprise beans attempt to obtain this interface, an exception is thrown, in accordance with the Enterprise JavaBeans (EJB) specification.
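Both ways of obtaining the UserTransaction mentioned above look roughly like this in a BMT session bean (a sketch; the bean name and the work inside the transaction are placeholders, the JNDI name is the standard java:comp/UserTransaction quoted above):

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN) // BMT only; not allowed for CMT beans
public class BatchUpdateBean {

    @Resource
    private SessionContext ctx;

    public void runUpdate() throws Exception {
        // Option 1: JNDI lookup of the standard name
        UserTransaction utx =
                (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");

        // Option 2: ask the session context (equivalent for session beans)
        // UserTransaction utx = ctx.getUserTransaction();

        utx.begin();
        try {
            // ... transactional work ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}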
A Web component or enterprise bean (CMT or BMT) can get the ExtendedJTATransaction interface through a lookup of java:comp/websphere/ExtendedJTATransaction. This interface provides access to the transaction identity and a mechanism to receive notification of transaction completion.
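A minimal lookup sketch for that interface follows; the fully qualified package is an assumption (the WebSphere-provided com.ibm.websphere.jtaextensions package in typical installations), and only the interface name and JNDI name come from the text above:

import javax.naming.InitialContext;
// WebSphere-provided interface; the package name is assumed here
import com.ibm.websphere.jtaextensions.ExtendedJTATransaction;

public class ExtendedJtaLookup {

    public static ExtendedJTATransaction lookup() throws Exception {
        // JNDI name as quoted above; available to CMT and BMT components alike
        return (ExtendedJTATransaction)
                new InitialContext().lookup("java:comp/websphere/ExtendedJTATransaction");
    }
}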

How to set up transactions for both the web application and batch jobs using Spring and Hibernate

I have an application which uses Spring 4.3 and Hibernate 5.3.
There's a web application with a presentation layer, a service layer and a DAO layer, as well as some jobs sharing the same service and DAO layers.
Transactions are initialized in different layers with @Transactional annotations.
This led me to a problem I described here: Controlling inner transaction settings from outer transaction with Spring 4.3
I read a bit about how to set up transactions to wire Spring and Hibernate together. It looks like the recommended approach is to initialize transactions in the service layer.
What I don't like is that most transactions exist only because they are required for Hibernate to work properly.
And when I really need a transaction for a job calling multiple service methods, it seems I have no choice but to keep initializing transactions from the jobs. So moving @Transactional annotations from the DAO layer to the service layer doesn't seem to make any difference.
How would you recommend setting up transactions for this kind of application?
Pardon me for replying in an answer, as I am not able to comment.
I don't quite understand what you mean by having to keep initializing transactions from the jobs.
Usually:
the DAO class should be annotated with @Repository;
the service class with @Service and @Transactional;
the web service, if you have one, with @RestController, @RequestMapping and @Transactional.
By doing so, any call into a service class runs in one transaction, so if service class A calls services B and C, and service C throws an error, the whole transaction will be rolled back, as sketched below.
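Here is a minimal sketch of that layering (class names are invented, and a Spring context with a configured transaction manager is assumed). @Transactional on the service entry point lets the nested service and DAO calls join one transaction:

import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Repository
class CustomerDao {
    // Hibernate/JPA access; joins whatever transaction is already active
    public void save(Object customer) { /* ... */ }
}

@Service
class BillingService {
    @Transactional  // REQUIRED by default: joins the caller's transaction
    public void bill(Object customer) { /* ... */ }
}

@Service
class RegistrationService {

    private final CustomerDao dao;
    private final BillingService billing;

    RegistrationService(CustomerDao dao, BillingService billing) {
        this.dao = dao;
        this.billing = billing;
    }

    @Transactional  // one transaction for the whole use case
    public void register(Object customer) {
        dao.save(customer);
        billing.bill(customer);  // an exception here rolls back dao.save() too
    }
}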

Generate "Event Scoped" beans out of the application scope

I am a newbie with CDI and EJB and I've just created a JBoss web application. Additionally, I wanted this app to process RabbitMQ messages. When processing these, I would like to do some persistence work. However, since I listen for RabbitMQ messages from an application-scoped bean that is started with the @Startup annotation, I have not been able to commit any transaction within this kind of scope; that is, since I am departing from the application scope, every bean that I instantiate from this scope will be application scoped. When I try to call em.getTransaction() and em.commit(), the code blows up complaining that I cannot invoke getTransaction() under JTA transactions, and when I use user transactions, every operation seems to be put onto the same transaction until it is finally rolled back, or there are errors complaining that there is already a transaction underway...
CDI beans do not support transactions out of the box like EJBs do. So your options are to either:
Upon receiving RabbitMQ messages, call some EJBs (directly or through observers) that will do the persistence work.
Add transaction support to your existing CDI beans using one of the following: Apache DeltaSpike or Seam Persistence.
It is indeed quite hard to give you more details based on the info you provided. However, on the conceptual level, one of the approaches above would do the trick.
The notion of an "event scope" also seems confusing; I would say you don't need it, as one of the approaches above will do. Take a look at CDI events as well.
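As a rough sketch of the first option (the class names, the placeholder entity and the RabbitMQ wiring are invented; only the transaction boundary matters here): the startup singleton delegates each message to a stateless EJB, and the container wraps every call in its own JTA transaction, so no manual getTransaction()/commit() is needed:

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.inject.Inject;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

@Entity
class StoredMessage {                      // placeholder entity
    @Id @GeneratedValue Long id;
    String payload;
    StoredMessage() { }
    StoredMessage(String payload) { this.payload = payload; }
}

@Stateless
class MessagePersister {

    @PersistenceContext
    private EntityManager em;

    // each delivery gets its own container-managed JTA transaction
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void store(String payload) {
        em.persist(new StoredMessage(payload));
    }
}

@Startup
@Singleton
public class RabbitListener {

    @Inject
    private MessagePersister persister;

    @PostConstruct
    void init() {
        // set up the RabbitMQ consumer here (details omitted) and,
        // for each received message body:
        // persister.store(body);            // runs in its own transaction
    }
}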

Application vs Container Managed EntityManager

I am currently having a problem understanding a concept of JPA.
I am currently using recent versions of EclipseLink, GlassFish and a Derby database to demonstrate a project.
Before I develop something on a much bigger scale, I need to be absolutely sure of how this persistence unit works in terms of different scopes.
I have a bunch of Servlet 3.0 servlets and currently save each user's associated entity classes in the request session object (everything in the same WAR file). I am currently using an application-managed EntityManager, via EntityManagerFactory and UserTransaction injection. It works smoothly when I test it by myself. Different versions of the entities occur when two people access the same entities at the same time. I want to work with managed beans across the same WAR, using the same persistence unit if possible.
I have read http://docs.oracle.com/javaee/6/tutorial/doc/bnbqw.html and a bunch of explanations of those scopes which don't make sense at all to me.
Long story short, what are the usage of and difference between application-managed and container-managed EntityManagers?
When we say application-managed transactions, it means it is your code that is supposed to handle the transaction. In a nutshell it means:
You call:
entityManager.getTransaction().begin(); // to start a transaction
then, on success, you make sure to call
entityManager.getTransaction().commit(); // to commit changes to the database
or, in case of failure, you make sure to call:
entityManager.getTransaction().rollback();
Now imagine you have a container which knows when to call begin(), commit() or rollback(); that's a container-managed transaction. Someone takes care of the transaction on your behalf.
You just need to specify that.
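Put together, the application-managed, resource-local variant looks roughly like this (a sketch; the persistence unit name "demoPU" is invented):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class ApplicationManagedExample {

    public static void main(String[] args) {
        // the application creates (and must close) the EntityManager itself
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demoPU");
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            // em.persist(...); em.merge(...); etc.
            em.getTransaction().commit();
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        } finally {
            em.close();
            emf.close();
        }
    }
}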
Container-managed transactions (CMT) can be regarded as a kind of declarative transaction, in which case transaction management is delegated to the container (normally an EJB container), and much development work is simplified.
If we are in a Java EE environment with an EJB container, we can use CMT directly.
If we are in a Java SE environment, or a Java EE environment without an EJB container, we can still take advantage of declarative transactions: one way is to use Spring, which uses AOP to implement declarative transaction management; another way is to use Guice, which uses a PersistFilter to implement declarative transactions.
In CMT, a container (whether an EJB container, Spring or Guice) takes care of transaction propagation and the commit/rollback work.
Application-managed transactions (AMT) differ from CMT in that we need to handle transactions programmatically in our code.
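For contrast, a container-managed sketch in an EJB container (the bean name is invented): the EntityManager is injected, and the container begins and commits the JTA transaction around the method call:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ContainerManagedExample {

    // container-managed, transaction-scoped persistence context
    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void save(Object entity) {
        // no begin()/commit(): the container demarcates the JTA transaction,
        // and calling em.getTransaction() here would throw an exception
        em.persist(entity);
    }
}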

EJB and Hibernate in Struts application

I have an application with a Struts 1.1 and EJB 2 combination, but now we are introducing a new piece into it with Hibernate 3.2. The Hibernate DAOs run in parallel with the EJB 2 session bean DAOs that use pure JDBC. I am concerned about JDBC connection management in this case, since EJB 2.0 has container-managed connections and transactions, but in the Hibernate case we begin and commit a Hibernate transaction ourselves. Is it safe to assume there will not be any issues with this architecture?
Need some analysis help.
PM
I was contemplating the same issue: whether the Hibernate module might access existing tables that are also used by the JDBC DAOs whose transactions are managed by session beans. But here is my approach:
I will have a delegate that invokes the EJB session bean, and since this bean is responsible for managing transactions, I will create my Hibernate DAOs and invoke them from this session bean, which I assume will not cause any issues.
The Hibernate SessionFactory for this application will be instantiated once using the Hibernate plugin configured in the Struts config XML and saved in the servlet context; the action class will then pass this SessionFactory instance through the EJB session bean delegate to the Hibernate DAO.
I guess this will be a clean approach, since the transactions will be managed by the EJB session beans deployed on WebSphere. And since the JDBC connection pools are configured on WebSphere and accessed through datasources, Hibernate does not have to worry about connection management.
Please tell me if I am on the right path with my assumptions.
Transactions demarcate a logical unit of work and hence are inherently isolated. But I am wondering why you need a combination of both: if you are already using EJB 2 + JDBC, why not stick to that?
Hibernate can be used without any problem with CMT (or BMT) session beans; it can share a connection pool with JDBC code and participate in the same transaction.
See the whole section 11.2, Database transaction demarcation, of the Hibernate documentation, and in particular 11.2.2, Using JTA.
What is not clear is whether the Hibernate module will be "isolated" from the entities managed via JDBC. If you access the same tables via both APIs, you'll have to take some precautions:
don't expect to mix JDBC entities in a graph of Hibernate entities (the reverse is possible, though).
respect and mimic Hibernate's optimistic concurrency strategy when updating rows via JDBC (see the sketch after this list).
bypassing Hibernate's API won't trigger any cache updates (if you're using the second-level cache), in which case you'd have to trigger them yourself.
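As a sketch of the second precaution (table and column names are invented; it assumes Hibernate maps a numeric version column on the same table): the plain JDBC update has to bump and check the same version column Hibernate uses, otherwise Hibernate's stale-object detection is silently defeated:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AccountJdbcDao {

    /** Updates a row the same way Hibernate's versioned update would. */
    public void updateBalance(Connection con, long id, long newBalance, int expectedVersion)
            throws SQLException {
        String sql = "UPDATE ACCOUNT "
                   + "SET BALANCE = ?, VERSION = VERSION + 1 "   // bump the version column
                   + "WHERE ID = ? AND VERSION = ?";             // and check it, like Hibernate does
        PreparedStatement ps = con.prepareStatement(sql);
        try {
            ps.setLong(1, newBalance);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            if (ps.executeUpdate() == 0) {
                // same meaning as Hibernate's StaleObjectStateException
                throw new SQLException("Row was updated concurrently (stale version)");
            }
        } finally {
            ps.close();
        }
    }
}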
Here is one of the possible solutions:
A common JNDI datasource, which will be used both by the EJBs and by Hibernate.
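In Hibernate 3.x that typically means pointing the SessionFactory at the container's datasource instead of a pool of its own. A hedged sketch (the JNDI name, dialect and lookup class are placeholders to adapt to your server; the property names are the standard Hibernate 3.x ones):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateJndiBootstrap {

    public static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration()
                // reuse the container-managed connection pool via JNDI
                .setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/AppDS")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.DB2Dialect")
                // let Hibernate join the container's CMT/JTA transactions
                .setProperty("hibernate.transaction.factory_class",
                             "org.hibernate.transaction.CMTTransactionFactory")
                // a transaction manager lookup is also needed, e.g. on WebSphere:
                .setProperty("hibernate.transaction.manager_lookup_class",
                             "org.hibernate.transaction.WebSphereExtendedJTATransactionLookup");
        // mapping files would be added here, e.g. cfg.addResource("Account.hbm.xml");
        return cfg.buildSessionFactory();
    }
}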
