I am not able to find a way to set the transaction isolation level in EJB. Can anybody tell me how I can set it? I am using JPA persistence.
I have looked at the following classes:
EntityManager, EntityManagerFactory, UserTransaction. None of them seems to have a method like setTransactionIsolation or anything similar. Do we need to change persistence.xml?
I just read the book Mastering EJB 3.0, 4th edition. It gives a full ten pages of theory about isolation levels, describing which problems occur at which level, but at the end it offers this paragraph:
"As we now know, the EJB standard does not deal with isolation levels directly,
and rightly so. EJB is a component specification. It defines the behavior and
contracts of a business component with clients and middleware infrastructure
(containers) such that the component can be rendered as various middleware
services properly. EJBs therefore are transactional components that interact
with resource managers, such as the JDBC resource manager or JMS resource
manager, via JTS, as part of a transaction. They are not, hence, resource
components in themselves. Since isolation levels are very specific to the
behavior and capabilities of the underlying resources, they should therefore be
specified at the resource API levels. "
What exactly does this mean? What is meant by resource-level APIs? Please help me. If JPA has no way to set the isolation level, then why does an EJB book spend so many pages of theory on it and make the book unnecessarily heavy? :(
As stated in the EJB specification:
Transactions not only make completion of a unit of work atomic, but they also isolate the units of work from each other, provided that the system allows concurrent execution of multiple units of work.
The API for managing an isolation level is resource-manager-specific. (Therefore, the EJB architecture does not define an API for managing isolation levels.)
The Bean Provider must take care when setting an isolation level. Most resource managers require that all accesses to the resource manager within a transaction are done with the same isolation level.
For session beans and message-driven beans with bean-managed transaction demarcation, the Bean Provider can specify the desirable isolation level programmatically in the enterprise bean’s methods, using the resource-manager specific API. For example, java.sql.Connection.setTransactionIsolation
The container provider should ensure that suitable isolation levels are provided to guarantee data consistency for entity beans.
Additional care must be taken if multiple enterprise beans access the same resource manager in the same transaction. Conflicts in the requested isolation levels must be avoided.
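For example, in a BMT session bean this means obtaining the JDBC connection and calling setTransactionIsolation on it before doing the work. A minimal sketch, assuming a hypothetical data source named jdbc/MyDS (the bean and JNDI names are made up for illustration):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import java.sql.Connection;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN) // BMT: we demarcate the transaction ourselves
public class ReportBean {

    @Resource(name = "jdbc/MyDS")   // hypothetical DataSource
    private DataSource ds;

    @Resource
    private UserTransaction utx;

    public void runReport() throws Exception {
        utx.begin();
        try (Connection con = ds.getConnection()) {
            // The isolation level is set on the resource (the JDBC connection),
            // not through any EJB or JPA API.
            con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            // ... JDBC work ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}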
I hope this fulfils your needs.
Are local transactions and BMT the same?
Do we need a transaction manager for local transactions?
I read that the transaction manager will be ineffective for local transactions. Is that correct?
Does JTA give provision for both CMT and BMT?
There is no difference in local or global transaction handling between BMT and CMT.
BMT and CMT only differ in how the start and end of a transaction are defined. In CMT they are determined by the calls to annotated methods; in BMT the start and end of a transaction are controlled explicitly using the UserTransaction object, as sketched below.
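A hedged sketch of the two styles (the bean names and method bodies are made up; the annotations are the standard EJB 3 ones):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

// CMT: the container starts a transaction when the method is entered
// and commits on return (or rolls back on a system exception).
@Stateless
public class CmtBean {
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void doWork() {
        // business logic only, no transaction API calls
    }
}

// BMT: the bean controls the transaction boundaries itself
// through the injected UserTransaction object.
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class BmtBean {
    @Resource
    private UserTransaction utx;

    public void doWork() throws Exception {
        utx.begin();
        try {
            // business logic
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}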
If a global transaction, or more precisely a distributed transaction, is necessary, the transaction manager will arrange it, independent of BMT or CMT.
These global transactions, or two-phase commits, become necessary as soon as more than one transactional resource is involved. For instance, if a message-driven bean calls a bean annotated for bean-managed transaction handling that makes changes in a DBMS, the two-phase commit is done for both resources: the message queue and the DBMS.
So, to answer your questions:
No; see above.
You can't do transactions spanning more than one resource without a transaction manager. A container providing distributed transactions, as Java EE containers normally do, will handle all transactions using a transaction manager. In JBoss you can configure data sources as "no JTA"; in that case you explicitly exempt them from two-phase commits, but I think the JBoss transaction manager will handle the DB connections of such a data source in spite of that.
Yes, if by "give provision" you mean "supports".
Below are a few points addressing your questions in order.
Global transaction support is available to Web and enterprise bean J2EE components and, with some limitations, to application client components.
Enterprise bean components can be subdivided into two categories: beans that use container-managed transactions (CMT), and those that use bean-managed transactions (BMT).
A local transaction containment (LTC) is used to define the application server behavior in an unspecified transaction context. An LTC is a bounded unit-of-work scope, within which zero, one, or more resource manager local transactions (RMLT) can be accessed. The LTC defines the boundary at which all RMLTs must be complete; any incomplete RMLTs are resolved, according to policy, by the container. An LTC is local to a bean instance; it is not shared across beans, even if those beans are managed by the same container.
LTCs are started by the container before dispatching a method on a J2EE component (such as an enterprise bean or servlet) whenever the dispatch occurs in the absence of a global transaction context. LTCs are completed by the container depending on the application-configured LTC boundary, for example, at the end of the method dispatch. There is no programmatic interface to the LTC support; LTCs are managed exclusively by the container and configured by the application deployer through transaction attributes in the application deployment descriptor.
A local transaction containment cannot exist concurrently with a global transaction. If application component dispatch occurs in the absence of a global transaction, the container always establishes an LTC for J2EE components at J2EE 1.3 or later.
If an application uses two or more resources, an external transaction manager is needed to coordinate the updates to all resource managers in a global transaction.
For more info on transaction managers: https://access.redhat.com/documentation/en-US/Fuse_ESB_Enterprise/7.1/html/EIP_Transaction_Guide/files/TxnManagers-WhatIs.html
BMT enterprise beans, application client components, and Web components can use the Java Transaction API (JTA) UserTransaction interface to define the demarcation of a global transaction. To obtain the UserTransaction interface, use a Java Naming and Directory Interface (JNDI) lookup of java:comp/UserTransaction, or use the getUserTransaction method from the SessionContext object. The UserTransaction interface is not available to CMT enterprise beans. If CMT enterprise beans attempt to obtain this interface, an exception is thrown, in accordance with the Enterprise JavaBeans (EJB) specification.
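Both ways of obtaining the interface might look roughly like this in a BMT session bean (a sketch; the bean name is made up):

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class DemarcationBean {

    @Resource
    private SessionContext ctx;

    public void viaJndi() throws Exception {
        // JNDI lookup of the standard name
        UserTransaction utx =
                (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
        utx.begin();
        // ... work ...
        utx.commit();
    }

    public void viaSessionContext() throws Exception {
        // Only legal for BMT beans; CMT beans get an exception here
        UserTransaction utx = ctx.getUserTransaction();
        utx.begin();
        // ... work ...
        utx.commit();
    }
}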
A Web component or enterprise bean (CMT or BMT) can get the ExtendedJTATransaction interface through a lookup of java:comp/websphere/ExtendedJTATransaction. This interface provides access to the transaction identity and a mechanism to receive notification of transaction completion.
I am trying to understand transactions, and more specifically I am doing it using the Spring framework. Going through the material that I have (both internet and books), I see these terminologies:
Container-managed transactions (CMT).
Bean-managed transactions (BMT).
Java Transaction API (JTA).
Also, for a large enterprise-level application, I have encountered terminologies like "local" and "global" transactions.
What I have understood is that global transactions are for cases in which we are managing two or more different resources (like one Oracle DB and one MySQL DB) and commits/rollbacks to them happen only if all of them succeed or fail together. Local transactions are for when we have only one resource to manage (like a single DB connection to MySQL).
I have the following doubts:
While writing a simple JDBC standalone program, we can code the commit/rollback ourselves. I believe this is an example of a local transaction; is this correct? And is this transaction taken care of by the JDBC library itself or by the driver?
What is CMT? What I understand is that the container takes care of the transaction management (like JBoss AS). If this is correct, how does it manage this internally? Are CMTs local or global? Do they use the JDBC APIs internally to manage the transactions?
What is BMT? This I am not able to understand. We can have an application deployed in an app server (and I believe in those cases the transactions would be managed by the container), or I can write a standalone application in which I write the transaction management code myself (try/catch blocks etc.). So which category does BMT fall into?
While reading about the Spring framework, I read that Spring has its own way of managing transactions. Further down the line, I read that the Spring framework uses JTA to manage transactions. I am confused about this. If JTA is an API, do we have multiple vendors who implement JTA (like for JPA, where multiple vendors provide the implementation, e.g. Hibernate)?
Is JTA for local or global transactions? When I write a simple JDBC program to manage a transaction, does it use APIs that comply with JTA? Or is JTA totally different from the transaction management a simple JDBC program uses?
Same for CMT: does CMT follow the rules that JTA has in place (the APIs, primarily)?
It would be great if you could answer point-wise; I did search the net, and my doubts are still not answered.
Concerning local/global transactions: by global I suppose you are talking about XA transactions (two-phase commit: http://en.wikipedia.org/wiki/X/Open_XA). This is only mandatory when you deal with multiple databases or transactional resources.
Yes, it's a "local transaction", meaning only one database is part of the transaction. The transaction is managed by the database and controlled through JDBC in that case.
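A minimal sketch of such a local transaction with plain JDBC (the URL, credentials and SQL are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class LocalTxExample {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password")) {
            con.setAutoCommit(false);   // start the local transaction
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE account SET balance = balance - ? WHERE id = ?")) {
                ps.setLong(1, 100);
                ps.setLong(2, 1);
                ps.executeUpdate();
                con.commit();           // the database makes the unit of work durable
            } catch (SQLException e) {
                con.rollback();         // undo everything since setAutoCommit(false)
                throw e;
            }
        }
    }
}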
CMT: container-managed. The container detects the start and the end of the transaction and performs the commit/rollback depending on the method's outcome (successful return: commit; exception: rollback). CMT relies on JTA to manage transactions on its resources. It is then up to the proper resource adapters (RA) to talk to JDBC or any other driver related to your EIS (data). Look at http://docs.oracle.com/javaee/5/tutorial/doc/bncjx.html
BMT: it is up to the bean to control the transaction boundaries. It's pretty rare to use this kind of transaction management these days. With BMT you control the UserTransaction object; it's an error to control transaction boundaries directly with JDBC. Bear in mind also that even in BMT, a framework like JPA (not JTA) will invalidate the current transaction on any error, even if you did not explicitly request a rollback. This makes BMT quite hard/dangerous/useless to use.
JTA (I hope you did not misspell JPA) is at another level: JTA is the API a resource adapter must implement to be part of a container transaction. Apart from the UserTransaction class (what you'll use in BMT to control transaction boundaries), you have nothing to do with JTA. There are not multiple implementations of JTA, but there are multiple implementations of JTS (Java Transaction Service); each application server vendor implements its own JTS.
JTA is an API for framework designers; JTA imposes a contract on the resource adapter (RA), and the RA will use JDBC or any other API to deal with its EIS (Enterprise Information System, let's call it your DB). JTA works for XA and non-XA transactions (if your RA supports XA transactions).
CMT uses JTA, but again, JTA is a low-level contract between components of the application server. Application designers should not care about JTA.
I think this is a fairly common question: how do I put my business logic in a global transaction in a distributed systems environment? To cite an example, I have a TaskA containing a couple of subtasks:
TaskA {subtask1, subtask2, subtask3 ... }
Each of these subtasks may execute on the local machine or a remote one. I want TaskA to execute in an atomic manner (success or failure) by means of a transaction. Every subtask has a rollback function; once TaskA decides the operation has failed (because one of the subtasks failed), it calls each subtask's rollback function. Otherwise TaskA commits the whole transaction.
To do this, I follow the "Audit Trail" transaction pattern to keep a record for each subtask, so TaskA can know the operation results of the subtasks and then decide whether to roll back or commit. This sounds simple; however, the hard part is how to associate each subtask with the global transaction.
When TaskA begins, it starts a global transaction about which the subtasks know nothing. To make a subtask aware of it, I have to pass the transaction context on every invocation of a subtask. This is really dreadful! A subtask may either execute in a new thread or execute remotely via a message sent through an AMQP broker, so it's hard to consolidate the way the context is propagated.
I did some research, e.g. "Transaction Patterns - A Collection of Four Transaction Related Patterns" and "Checked Transactions in an Asynchronous Message Passing Environment", but none of these solve my problem. They either lack a practical example or don't solve the context propagation issue.
I wonder how people solve this, as this sort of transaction must be common in enterprise software.
Is X/Open XA the only solution for this? Can JTA help here? (I haven't looked into JTA, as most material about it relates to database transactions, and I am using Spring; I don't want to involve another Java EE application server in my software.)
Can some expert share some thoughts with me? Thank you.
Conclusion
Arjan and Martin gave out really good answers, thank you.
Finally I didn't go this way. After more research, I chose another pattern, "CheckPoint" 1.
Looking into my requirements, I found that my intention with the "Audit Trail" transaction pattern was to know how far the operation had proceeded, so that if it failed I could restart it at the failing spot after reloading some context. Actually this is not a transaction; it doesn't roll back the other successful steps after a failure. This is the essence of the CheckPoint pattern.
However, studying distributed transactions taught me a lot of interesting things beyond what Arjan and Martin have mentioned. I also suggest that people digging into this area take a look at CORBA, which is a well-known protocol for distributed systems.
You're right, you need two-phase commit support thanks to an XA transaction manager exposed through the JTA API.
As far as I know, Spring does not provide a transaction manager implementation itself. Its JtaTransactionManager only delegates to an existing implementation, usually provided by a Java EE application server.
So you will have to plug a JTA implementation into Spring to get the job done. Here are some proposals:
JOTM
JBossTS based on Arjuna.
Atomikos
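As an illustration of the wiring, here is a sketch using the Atomikos standalone classes together with Spring's JtaTransactionManager (treat the exact class names and settings as assumptions to verify against the Atomikos and Spring documentation):

import javax.transaction.TransactionManager;
import javax.transaction.UserTransaction;

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class JtaConfig {

    @Bean(initMethod = "init", destroyMethod = "close")
    public TransactionManager atomikosTransactionManager() {
        UserTransactionManager tm = new UserTransactionManager();
        tm.setForceShutdown(false);
        return tm;
    }

    @Bean
    public UserTransaction atomikosUserTransaction() throws Exception {
        UserTransactionImp utx = new UserTransactionImp();
        utx.setTransactionTimeout(300);
        return utx;
    }

    @Bean
    public PlatformTransactionManager transactionManager() throws Exception {
        // Spring only delegates; Atomikos does the actual JTA/XA work
        return new JtaTransactionManager(atomikosUserTransaction(), atomikosTransactionManager());
    }
}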
Then you will have to implement your resource manager to support two-phase commit. In the Java EE world, this consists of a resource adapter packaged as a RAR archive. Depending on the type of resource, the following are the things to read about/implement:
XAResource interface
JCA (Java Connector Architecture), mainly if a remote connection is involved
JTS (Java Transaction Service), if transaction propagation between nodes is required
As examples, I recommend looking at implementations for the classic "transactions with files" problem:
the transactional file manager from JBoss Transactions
XADisk project
If you want to write your own transactional resource, you indeed need to implement an XAResource and let it join an ongoing transaction, handle prepare and commit requests from the transaction manager etc.
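A bare-bones sketch of such a resource (the class is entirely made up; a real implementation also has to persist its prepared state, handle recovery and report the proper XA error codes):

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class MyXAResource implements XAResource {

    @Override
    public void start(Xid xid, int flags) throws XAException {
        // associate the following work with this transaction branch
    }

    @Override
    public void end(Xid xid, int flags) throws XAException {
        // dissociate the work from the branch (TMSUCCESS, TMFAIL or TMSUSPEND)
    }

    @Override
    public int prepare(Xid xid) throws XAException {
        // persist enough state to be able to commit later, then vote
        return XA_OK;   // or XA_RDONLY if nothing was changed
    }

    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        // make the changes durable and visible
    }

    @Override
    public void rollback(Xid xid) throws XAException {
        // undo everything recorded for this branch
    }

    @Override
    public Xid[] recover(int flag) throws XAException {
        // return branches that are prepared but not yet completed
        return new Xid[0];
    }

    @Override
    public void forget(Xid xid) throws XAException { }

    @Override
    public boolean isSameRM(XAResource other) throws XAException {
        return other instanceof MyXAResource;
    }

    @Override
    public int getTransactionTimeout() throws XAException { return 0; }

    @Override
    public boolean setTransactionTimeout(int seconds) throws XAException { return false; }
}

Joining the ongoing transaction is then a matter of obtaining the TransactionManager from your environment and calling transactionManager.getTransaction().enlistResource(new MyXAResource()).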
Datasources are the most well-known transactional resources, but as mentioned they are not the only ones. You already mentioned JMS providers. A variety of caching solutions (e.g. Infinispan) are also transactional resources.
Implementing XAResources and working with the lower level part of the JTA API and the even lower level JTS (Java Transaction Service) is not a task for the faint of heart. The API can be archaic and the entire process is only scarcely documented.
The reason is that regular enterprise applications creating their own transactional resources is extremely rare. The entire point of being transactional is to do an action that is atomic for an external observer.
To be observable in the overwhelming majority of cases means the effects of the action are present in a database. Nearly each and every datasource is already transactional, so that use case is fully covered.
Another observable effect is whether a message has been sent or not, and that too is fully covered by the existing messaging solutions.
Finally, updating a (cluster-wide) in-memory map is yet another observable effect, which too is covered by the major cache providers.
There's a remnant of demand for transactional effects when operating with external enterprise information systems (EIS), and as a rule of thumb the vendors of such systems provide transaction aware connectors.
The sliver of use cases that remains is so small that apparently nobody ever really bothered to write much about it. There are some blogs out there that cover some of the basics, but much will be left to your own experimentation.
Do try to verify for yourself if you really absolutely need to go down this road, and if your need can't be met by one of the existing transactional resources.
You could do the following (similar to your checkpoint strategy):
TaskA executes in a (local) JTA transaction and tries to reserve the necessary resources before it delegates to your subtasks
Subtask invocations are done via JMS/XA: if step 1 fails, no subtask will ever be triggered and if step 1 commits then the JMS invocations will be received at each subtask
Subtasks (re)try to process their invocation message with a JMS max redelivery limit set (see your JMS vendor docs on how to do this)
Configure a "dead letter queue" for any failures in 3.
This works, assuming that:
- retrying in step 3 makes sense
- there is a bit of human intervention needed for messages in the dead letter queue
If this is not acceptable then there is also TCC: http://www.atomikos.com/Main/DownloadWhitepapers?article=TccForRestApi.pdf - this can be seen as a "reservation" pattern for REST services.
Hope this helps
Guy
http://www.atomikos.com
I am currently having a problem with understanding a concept of JPA.
I am currently using/developing with a recent EclipseLink, GlassFish, and a Derby database to demonstrate a project.
Before I develop something on a much bigger scale, I need to be absolutely sure of how this persistence unit works in terms of the different scopes.
I have a bunch of Servlet 3.0 servlets and currently save each user's associated entity classes in the request session object (everything in the same WAR file). I am currently using an application-managed EntityManager, using EntityManagerFactory and UserTransaction injection. It works smoothly when I test it by myself. Different versions of the entities occur when two people access the same entities at the same time. I want to work with managed beans across the same WAR and the same persistence unit if possible.
I have read http://docs.oracle.com/javaee/6/tutorial/doc/bnbqw.html and a bunch of explanations of those scopes, which don't make sense to me at all.
Long story short, what are the usage of and difference between application-managed and container-managed EntityManagers?
When you say application-managed transactions, it means it is your code that is supposed to handle the transaction. In a nutshell it means:
You call:
entityManager.getTransaction().begin(); //to start a transaction
then, if it succeeds, you will ensure you call
entityManager.getTransaction().commit(); //to commit changes to the database
or, in case of failure, you will make sure to call:
entityManager.getTransaction().rollback();
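Put together, the application-managed flow usually looks something like this (a sketch; the persistence unit name "myPU" is made up):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class ApplicationManagedExample {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU");
        EntityManager em = emf.createEntityManager();   // application-managed: you create and close it
        try {
            em.getTransaction().begin();
            // ... persist/merge/remove entities here ...
            em.getTransaction().commit();
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        } finally {
            em.close();
            emf.close();
        }
    }
}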
Now imagine you have a container which knows when to call begin(), commit() or rollback(); that's a container-managed transaction. Someone is taking care of the transaction on your behalf.
You just need to specify that.
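By contrast, a container-managed EntityManager in a CMT session bean could look roughly like this (a sketch; the bean, entity and persistence unit names are invented):

import javax.ejb.Stateless;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

@Entity
class Customer {                      // minimal entity, just for the sketch
    @Id Long id;
    String name;
}

@Stateless // CMT by default: the container wraps each business method in a JTA transaction
public class CustomerService {

    // Container-managed, transaction-scoped EntityManager:
    // it joins the JTA transaction of the current call automatically.
    @PersistenceContext(unitName = "myPU")   // hypothetical persistence unit
    private EntityManager em;

    public void rename(Long id, String newName) {
        Customer c = em.find(Customer.class, id);
        c.name = newName;
        // No begin()/commit() here: the container commits when the method returns
        // and rolls back if a system exception is thrown.
    }
}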
Container-managed transactions (CMT) can be regarded as a kind of declarative transaction, in which case transaction management is delegated to the container (normally the EJB container), and much development work is simplified.
If we are in a Java EE environment with an EJB container, we can use CMT directly.
If we are in a Java SE environment, or a Java EE environment without an EJB container, we can still take advantage of declarative transaction management: one way is to use Spring, which uses AOP to implement it; another way is to use Guice, which uses a PersistFilter to implement it.
In CMT, the container (whether an EJB container, Spring or Guice) takes care of transaction propagation and the commit/rollback handling.
Application-managed transactions (AMT) differ from CMT in that we need to handle transactions programmatically in our code.
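With Spring, the declarative (CMT-like) style could look like this minimal sketch (the service and repository are placeholders, not a real API):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

interface AccountRepository {   // stand-in for whatever data access layer you use
    void debit(long accountId, long amount);
    void credit(long accountId, long amount);
}

@Service
public class TransferService {

    private final AccountRepository accounts;

    public TransferService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    // Spring's AOP proxy begins a transaction before the method runs,
    // commits on normal return and rolls back on a RuntimeException.
    @Transactional
    public void transfer(long fromId, long toId, long amount) {
        accounts.debit(fromId, amount);
        accounts.credit(toId, amount);
    }
}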
Is it possible to implement one of the aforementioned transaction managers if the data source is not accessible via JDBC?
Edit:
I want to create an add-in for an existing application. My add-in shall be responsible for logging the read and write accesses of long-running workflow transactions. My add-in should additionally be responsible for caching variables in case they are required, so that there does not necessarily have to be a read/write operation every time a variable is accessed.
The application is running in a Tomcat 6 environment, and I get the data by calling a plugin manager (which gets the data from the different data sources).
Do you know of any links I could read, or maybe of some existing solution?
Sounds like you still have not fully grasped the distinction between a transaction manager and a resource manager. Transaction managers like JBossTS drive resource managers like Oracle, MSSQL, etc. via the XAResources supplied by the RMs' drivers.
You're not implementing a transaction manager - it's already implemented. You're implementing a new resource manager and using the existing transaction manager to drive it. Read the XA specification, then implement XAResource and enlist your resource with the transaction manager. As long as your impl is spec compliant the transaction manager will use it just the same way it does with the implementations supplied by database drivers or message queues.
Note that doing I/O to external (i.e. non-transactional) systems in ACID transaction scope is basically impossible. The best you can hope for is some form of compensation based model or 1PC behaviour with last resource commit optimization.