This page from the Hibernate tutorial on jboss.org says:
Hibernate offers three methods of current session tracking. The "thread" based method is not intended for production use; it is merely useful for prototyping and tutorials such as this one.
I haven't been able to find any other sources indicating one way or the other.
Is this true? If so, why, and is it still true for Hibernate 4.x? And which session contexts are intended for production use?
Hibernate supports these session management options:
(jta) org.hibernate.context.JTASessionContext: current sessions are tracked and scoped by a JTA transaction. The processing here is exactly the same as in the older JTA-only approach.
(thread) org.hibernate.context.ThreadLocalSessionContext: current sessions are tracked by thread of execution.
(managed) org.hibernate.context.ManagedSessionContext: current sessions are tracked by thread of execution. However, you are responsible for binding and unbinding a Session instance with the static methods on this class: it does not open, flush, or close a Session.
See: http://docs.jboss.org/hibernate/orm/5.0/userGuide/en-US/html_single/#architecture-current-session
Older applications that used Hibernate directly often relied on a "session per thread" model, even though that approach is not recommended now.
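As a minimal sketch (assuming programmatic configuration; the property can equally go in hibernate.cfg.xml), this is how the "thread" strategy is selected and how the current session is obtained:

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class CurrentSessionExample {

    public static void main(String[] args) {
        // "thread" maps to ThreadLocalSessionContext; "jta" and "managed" select the other two strategies
        SessionFactory sessionFactory = new Configuration()
                .configure() // reads hibernate.cfg.xml from the classpath
                .setProperty("hibernate.current_session_context_class", "thread")
                .buildSessionFactory();

        // getCurrentSession() returns the Session bound to the chosen context (here: the current thread);
        // with the "thread" strategy it is flushed and closed when its transaction ends
        Session session = sessionFactory.getCurrentSession();
        session.beginTransaction();
        // ... load and persist entities here ...
        session.getTransaction().commit();

        sessionFactory.close();
    }
}
```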
Related
I think this is a fairly common question: how do I put my business logic in a global transaction in a distributed-systems environment? To give an example, I have a TaskA containing a couple of subtasks:
TaskA {subtask1, subtask2, subtask3 ... }
Each of these subtasks may execute on the local machine or on a remote one. I want TaskA to execute in an atomic manner (succeed or fail as a whole) by means of a transaction. Every subtask has a rollback function; once TaskA decides the operation has failed (because one of the subtasks failed), it calls each subtask's rollback function. Otherwise TaskA commits the whole transaction.
To do this, I followed the "Audit Trail" transaction pattern and keep a record for each subtask, so TaskA knows the results of the subtasks and can then decide whether to roll back or commit. This sounds simple; however, the hard part is how to associate each subtask with the global transaction.
When TaskA begins, it starts a global transaction about which the subtasks know nothing. To make a subtask aware of it, I have to pass the transaction context to every subtask invocation. This is really dreadful! A subtask may either execute in a new thread or execute remotely via a message sent through an AMQP broker, so it's hard to unify how the context is propagated.
I did some research, e.g. "Transaction Patterns - A Collection of Four Transaction Related Patterns" and "Checked Transactions in an Asynchronous Message Passing Environment", but none of these solve my problem: they either lack a practical example or don't address the context-propagation issue.
I wonder how people solve this, as this sort of transaction must be common in enterprise software.
Is X/Open XA the only solution for this? Can JTA help here? (I haven't looked into JTA, as most of the material about it relates to database transactions; and since I am using Spring, I don't want to pull another Java EE application server into my software.)
Can some expert share some thoughts with me? Thank you.
Conclusion
Arjan and Martin gave really good answers, thank you.
In the end I didn't go this way. After more research, I chose another pattern, "CheckPoint" [1].
Looking into my requirement, I found that my intent with the "Audit Trail" transaction pattern was to know how far the operation had proceeded, so that if it failed I could restart it at the failed step after reloading some context. That is not really a transaction: it doesn't roll back the other successful steps after a failure. This is the essence of the CheckPoint pattern.
However, studying distributed transactions taught me a lot of interesting things. Apart from what Arjan and Martin have mentioned, I also suggest that people digging into this area take a look at CORBA, which is a well-known protocol for distributed systems.
You're right, you need two-phase commit support, via an XA transaction manager exposed through the JTA API.
As far as I know, Spring does not provide a transaction manager implementation itself. Its JtaTransactionManager only delegates to an existing implementation, usually provided by a Java EE application server.
So you will have to plug a JTA implementation into Spring to get the job done (see the wiring sketch after this list). Here are some proposals:
JOTM
JBossTS based on Arjuna.
Atomikos
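As an illustration of plugging one of these in, here is a hedged sketch of wiring Atomikos into Spring; the Atomikos class names come from its transactions-jta module, and the timeout and shutdown settings are placeholder choices:

```java
import javax.transaction.SystemException;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.jta.JtaTransactionManager;

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;

// Sketch: Spring's JtaTransactionManager delegating to Atomikos as the actual JTA implementation.
@Configuration
@EnableTransactionManagement
public class JtaConfig {

    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager tm = new UserTransactionManager();
        tm.setForceShutdown(false); // placeholder choice
        return tm;
    }

    @Bean
    public UserTransactionImp atomikosUserTransaction() throws SystemException {
        UserTransactionImp ut = new UserTransactionImp();
        ut.setTransactionTimeout(300); // seconds, placeholder choice
        return ut;
    }

    @Bean
    public JtaTransactionManager transactionManager() throws SystemException {
        // Spring only coordinates; Atomikos does the actual two-phase commit work
        return new JtaTransactionManager(atomikosUserTransaction(), atomikosTransactionManager());
    }
}
```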
Then you will have to implement your resource manager to support two-phase commit. In the Java EE world, this takes the form of a resource adapter packaged as a RAR archive. Depending on the type of resource, the following aspects need to be read about and implemented:
XAResource interface
JCA (Java Connector Architecture), mainly if a remote connection is involved
JTS (Java Transaction Service), if transaction propagation between nodes is required
As examples, I recommend looking at implementations for the classic "transactions with files" problem:
the transactional file manager from JBoss Transactions
XADisk project
If you want to write your own transactional resource, you indeed need to implement an XAResource and let it join an ongoing transaction, handle prepare and commit requests from the transaction manager etc.
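A bare-bones skeleton of what that entails might look like the following; the method set is the javax.transaction.xa.XAResource contract, while the bodies are placeholders for your own resource's logic:

```java
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Sketch of a custom transactional resource: fill the bodies with your own prepare/commit/rollback work.
public class MyXAResource implements XAResource {

    @Override
    public void start(Xid xid, int flags) throws XAException {
        // associate the current work with the transaction branch identified by xid
    }

    @Override
    public void end(Xid xid, int flags) throws XAException {
        // disassociate the work from the branch (TMSUCCESS, TMFAIL, TMSUSPEND)
    }

    @Override
    public int prepare(Xid xid) throws XAException {
        // persist enough state to guarantee that a later commit can succeed
        return XA_OK; // or XA_RDONLY if nothing was changed
    }

    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        // make the changes for this branch durable and visible
    }

    @Override
    public void rollback(Xid xid) throws XAException {
        // undo any work done for this branch
    }

    @Override
    public void forget(Xid xid) throws XAException {
        // discard knowledge of a heuristically completed branch
    }

    @Override
    public Xid[] recover(int flag) throws XAException {
        // return branches that were prepared but never completed (crash recovery)
        return new Xid[0];
    }

    @Override
    public boolean isSameRM(XAResource other) throws XAException {
        return other == this;
    }

    @Override
    public int getTransactionTimeout() throws XAException {
        return 0;
    }

    @Override
    public boolean setTransactionTimeout(int seconds) throws XAException {
        return false;
    }
}
```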
Datasources are the most well-known transactional resources, but as mentioned they are not the only ones. You already mentioned JMS providers. A variety of caching solutions (e.g. Infinispan) are also transactional resources.
Implementing XAResources and working with the lower level part of the JTA API and the even lower level JTS (Java Transaction Service) is not a task for the faint of heart. The API can be archaic and the entire process is only scarcely documented.
The reason is that it is extremely rare for a regular enterprise application to create its own transactional resources. The entire point of being transactional is to perform an action that is atomic for an external observer.
To be observable in the overwhelming majority of cases means the effects of the action are present in a database. Nearly each and every datasource is already transactional, so that use case is fully covered.
Another observable effect is whether a message has been sent or not, and that too is fully covered by the existing messaging solutions.
Finally, updating a (cluster-wide) in-memory map is yet another observable effect, which too is covered by the major cache providers.
There's a remnant of demand for transactional effects when operating with external enterprise information systems (EIS), and as a rule of thumb the vendors of such systems provide transaction-aware connectors.
The sliver of use cases that remains is so small that apparently nobody ever really bothered to write much about it. There are some blogs out there that cover some of the basics, but much will be left to your own experimentation.
Do try to verify for yourself whether you really, absolutely need to go down this road, and whether your need can't be met by one of the existing transactional resources.
You could do the following (similar to your checkpoint strategy):
1. TaskA executes in a (local) JTA transaction and tries to reserve the necessary resources before it delegates to your subtasks.
2. Subtask invocations are done via JMS/XA: if step 1 fails, no subtask will ever be triggered, and if step 1 commits then the JMS invocations will be received at each subtask.
3. Subtasks (re)try to process their invocation message, with a JMS max redelivery limit set (see your JMS vendor docs on how to do this).
4. Configure a "dead letter queue" for any failures in step 3.
This works, assuming that:
- retrying in step 3 makes sense
- there is a bit of human intervention needed for messages in the dead letter queue
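A minimal sketch of steps 1 and 2, assuming Spring with an XA-capable JMS ConnectionFactory; TaskAService, ReservationRepository, the task id and the queue names are hypothetical placeholders:

```java
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TaskAService {

    private final JmsTemplate jmsTemplate;             // backed by an XA-capable ConnectionFactory
    private final ReservationRepository reservations;  // hypothetical collaborator doing the step-1 reservations

    public TaskAService(JmsTemplate jmsTemplate, ReservationRepository reservations) {
        this.jmsTemplate = jmsTemplate;
        this.reservations = reservations;
    }

    // One JTA transaction: if reserve() throws, no JMS message is ever delivered;
    // if the transaction commits, all subtask invocations become visible together.
    @Transactional
    public void startTaskA(String taskId) {
        reservations.reserve(taskId);
        jmsTemplate.convertAndSend("subtask1.queue", taskId);
        jmsTemplate.convertAndSend("subtask2.queue", taskId);
        jmsTemplate.convertAndSend("subtask3.queue", taskId);
    }

    // Hypothetical interface standing in for your own reservation logic against the database.
    interface ReservationRepository {
        void reserve(String taskId);
    }
}
```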
If this is not acceptable then there is also TCC: http://www.atomikos.com/Main/DownloadWhitepapers?article=TccForRestApi.pdf - this can be seen as a "reservation" pattern for REST services.
Hope this helps
Guy
http://www.atomikos.com
Is it possible to implement one of the aforementioned transaction managers if the datasource is not callable via JDBC?
Edited
I want to create an add-in for an existing application. My add-in shall be responsible for logging the read and write accesses of long-running workflow transactions. It should additionally be responsible for caching variables in case they are required, so that there does not necessarily have to be a read/write operation every time a variable is accessed.
The application is running in a Tomcat 6 environment and I get the data by calling a plugin manager (which gets data from the different datasources).
Do you know of any links I could read, or maybe of some existing solution?
It sounds like you still have not fully grasped the distinction between a transaction manager and a resource manager. Transaction managers like JBossTS drive resource managers like Oracle, MSSQL, etc. via the XAResources supplied by the RMs' drivers.
You're not implementing a transaction manager - it's already implemented. You're implementing a new resource manager and using the existing transaction manager to drive it. Read the XA specification, then implement XAResource and enlist your resource with the transaction manager. As long as your impl is spec compliant the transaction manager will use it just the same way it does with the implementations supplied by database drivers or message queues.
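For illustration, enlisting such a resource with the ongoing transaction might look like this hedged sketch, where the TransactionManager is whatever your environment exposes (e.g. looked up via JNDI) and the XAResource is your own spec-compliant implementation:

```java
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;
import javax.transaction.xa.XAResource;

public class ResourceEnlister {

    // "transactionManager" is the environment's TransactionManager; "resource" is your custom XAResource
    public void enlist(TransactionManager transactionManager, XAResource resource) throws Exception {
        Transaction tx = transactionManager.getTransaction(); // the transaction currently associated with this thread
        tx.enlistResource(resource);                          // from now on the TM drives prepare/commit/rollback on it
    }
}
```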
Note that doing I/O to external (i.e. non-transactional) systems in ACID transaction scope is basically impossible. The best you can hope for is some form of compensation based model or 1PC behaviour with last resource commit optimization.
I have a web-app, which I deploy on Tomcat 6 and it uses Hibernate.
It receives messages on a JMS queue which trigger changes both to my DB, via Hibernate and to an Object of mine (Agent).
The web-requests also access the DB, via Hibernate, and access the shared object (there's a ConcurrentHashMap<AgentId,Agent> held by a singleton).
My problem is that I have a JMS message which changes several different Agents and several tables and I need the changes in the Agents to be available if and only if the DB transaction completed successfully. In addition I do not want to employ read locks as that is too much of a performance hazard for me.
I was thinking of somehow implementing the XAResource interface for my singleton and then use JTA to manage both my singleton and my Hibernate transaction.
What do you think? Does it sound reasonable? Am I way off?
If any additional details are needed please don't hesitate to ask
Ittai
Instead of implementing XAResource, you could use a transactional cache like EHCache, which supports JTA since 2.0 (i.e. it can act as an XA resource and participate in an XA transaction alongside other XA resources).
Is it correct to say that using JTA Transactions with Hibernate contrasts using the Open-Session-In-View with regards to the session scope?
From what I've been able to gather, the Session's scope with JTA transactions is a transaction (mainly based on this link), while in the Open-Session-In-View pattern the Session's scope is the request and you can have multiple transactions within it.
I'm asking, first, to understand, and second, to verify who is responsible for the session handling when using JTA.
Currently, when using the Open-Session-In-View, I have a HibernateUtil class which handles opening, retrieving and closing of sessions (via ThreadLocal<Session>).
When I switch over to using JTA, will Hibernate handle the above session actions (perhaps as a consequence of my calling userTransaction.begin / userTransaction.rollback)?
BTW, I'm asking about JTA as I need to coordinate a transaction across Hibernate, JMS and EHCache, so this isn't a general best-practices "let's-use-JTA" question.
Ittai
Well, if you're using JTA then the JTA manager (usually the EJB3 container) is responsible for transactions.
Typically, the same good old open-transaction-in-view model is used; however, with UserTransaction and, say, a Swing client, it's possible to have long-lasting transactions spanning multiple requests (though that's bad practice in general).
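As a hedged sketch of the JTA case (assuming hibernate.current_session_context_class is set to jta and a UserTransaction is available, e.g. from JNDI), Hibernate itself takes over the open/flush/close work your HibernateUtil does today:

```java
import javax.transaction.UserTransaction;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class JtaScopedWork {

    // Assumes the "jta" current-session context; the UserTransaction would typically be looked up from JNDI
    public void doWork(UserTransaction userTransaction, SessionFactory sessionFactory) throws Exception {
        userTransaction.begin();

        // getCurrentSession() returns a Session scoped to the active JTA transaction;
        // Hibernate flushes and closes it when that transaction completes, so no manual close is needed
        Session session = sessionFactory.getCurrentSession();
        // ... read and write entities ...

        userTransaction.commit();
    }
}
```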
BTW, I'm asking about JTA as I need to coordinate a transaction across Hibernate, JMS and EHCache, so this isn't a general best-practices "let's-use-JTA" question.
Good luck. I found that an external transaction manager (I've used Atomikos) plus Spring worked better for my needs than JBoss.
When using Hibernate with Spring, can someone explain how the Session unit of work and transactions are handled?
Is the transaction started at the beginning of the page request and committed at the end?
Can I have multiple DB calls per request that each have different transaction (isolation) levels? E.g. some are left as default, while others are read-uncommitted?
Is the transaction started at the beginning of the page request and committed at the end?
In a web application, opening / closing the Session is usually done using the "Open Session in View" pattern. Spring comes with a OpenSessionInViewFilter or an OpenSessionInViewInterceptor for this. Both make Hibernate Sessions available via the current thread, which will be autodetected by transaction managers. It is suitable for service layer transactions via HibernateTransactionManager or JtaTransactionManager as well as for non-transactional execution (if configured appropriately).
Transaction demarcation is usually done at the service methods level, using Spring AOP to wrap them inside transactions.
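As a hedged sketch of that demarcation (OrderService, OrderDao and Order are hypothetical names standing in for your own service, DAO and entity):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderDao orderDao;

    public OrderService(OrderDao orderDao) {
        this.orderDao = orderDao;
    }

    // Spring's AOP proxy begins a transaction before this method runs and commits it on return
    // (or rolls back on an unchecked exception); the Hibernate Session bound to the current
    // thread by OpenSessionInViewFilter/Interceptor is reused inside it.
    @Transactional
    public void placeOrder(Order order) {
        orderDao.save(order);
    }

    // Hypothetical collaborators standing in for your own DAO and entity.
    interface OrderDao {
        void save(Order order);
    }

    static class Order {
    }
}
```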
Can I have multiple DB calls per request that each have different transaction levels? E.g. some are left as default, while others are read-uncommitted?
You can have nested transactions with different isolation levels. Refer to the Transaction Management chapter.
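For instance, here is a hedged sketch of two service methods in the same request path with different settings (ReportDao and Report are hypothetical; whether READ_UNCOMMITTED is actually honoured depends on the transaction manager and the underlying database):

```java
import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    private final ReportDao reportDao;

    public ReportService(ReportDao reportDao) {
        this.reportDao = reportDao;
    }

    // Default settings: propagation REQUIRED, the database's default isolation level.
    @Transactional
    public void saveReport(Report report) {
        reportDao.save(report);
    }

    // A separate transaction with a lower isolation level.
    @Transactional(propagation = Propagation.REQUIRES_NEW,
                   isolation = Isolation.READ_UNCOMMITTED)
    public List<Report> readDirty() {
        return reportDao.findAll();
    }

    // Hypothetical collaborators standing in for your own DAO and entity.
    interface ReportDao {
        void save(Report report);
        List<Report> findAll();
    }

    static class Report {
    }
}
```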
It's usually configured declaratively with aspect-oriented programming (AOP). You can define which beans, classes, packages or methods require transactions, and Spring will provide them in a fashion similar to EJBs. Thanks to AOP you have full control over what exactly is wrapped into transactions and how.
Read more about it here: http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/transaction.html#transaction-declarative
To your questions:
1. Is the transaction started at the beginning of the page request, and committed at the end?
Not exactly. The usual workflow of Spring MVC is:
requestDispatcher -> Controller -> Service call (transaction starts and ends here)
Services may invoke DAOs, and DAOs will talk to the datastore via Hibernate.
A transaction can also continue living after the HTTP response, e.g. when a service runs in a separate thread.
2. Can I have multiple DB calls per request, that each have different transaction levels? E.g. some are left as default, while others are read-uncommitted?
Yes, certainly you can.
Let's say your application does a migration job: a request says "start migration!". Then your service will read data from the source database, do some magic based on your migration logic, finally write to the target database, and commit the transaction.
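A hedged sketch of that migration example (MigrationService, SourceDao, TargetDao and Record are hypothetical names; the point is only that the whole service call runs in one transaction started and committed by Spring):

```java
import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MigrationService {

    private final SourceDao sourceDao;
    private final TargetDao targetDao;

    public MigrationService(SourceDao sourceDao, TargetDao targetDao) {
        this.sourceDao = sourceDao;
        this.targetDao = targetDao;
    }

    // The transaction starts when the controller calls this method and commits when it returns;
    // if anything throws, the whole migration is rolled back.
    @Transactional
    public void migrate() {
        List<Record> records = sourceDao.readAll();     // read from the source database
        for (Record record : records) {
            targetDao.write(transform(record));         // apply the migration logic, write to the target
        }
    }

    private Record transform(Record record) {
        // migration logic goes here
        return record;
    }

    // Hypothetical collaborators standing in for your own DAOs and data type.
    interface SourceDao {
        List<Record> readAll();
    }

    interface TargetDao {
        void write(Record record);
    }

    static class Record {
    }
}
```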