We got a requirement to create a transactional operation that is handled across multiple applications.
I'm familiar with the @Transactional annotation for achieving transactions, and with doing it programmatically, for example:
@Autowired
private EntityManagerFactory emf;

public void doSomething() {
    EntityManager em = emf.createEntityManager();
    EntityTransaction tx = em.getTransaction();
    try {
        tx.begin();
        em.createNativeQuery("...do something...").executeUpdate();
        tx.commit();
    } catch (RuntimeException e) {
        if (tx.isActive()) tx.rollback(); // roll back on failure instead of leaving the tx dangling
        throw e;
    } finally {
        em.close();
    }
}
So, if I need to allow an app to build up a transaction dynamically, I could pass the EntityManager as a parameter and accumulate operations from different methods.
However, we got an unusual requirement to achieve something like the code above, but involving multiple applications that share the same database; it would essentially be a distributed transaction.
Is this feasible with Spring? I'm not sure whether I should create some REST services where one of the parameters is a serialized EntityManager, although this looks very weird to me. Is it possible to handle a shared transaction that is populated with operations across multiple apps?
You could use XA transactions over JTA. Spring has JTA support, which implies using a particular JTA transaction manager implementation under the hood, such as Atomikos (paid) or Bitronix (Apache-licensed, though it currently has maintenance issues).
A good overview is also available here, especially its description of the alternatives to XA, including eventual consistency.
A Spring Boot + Atomikos sample on GitHub demonstrates the basics of JTA configuration within a single application.
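For orientation only, a common way to wire Atomikos into Spring's JtaTransactionManager looks roughly like the sketch below; the class names are the real Atomikos/Spring ones, but the exact configuration is an assumption and varies by version:

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class JtaConfig {

    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager utm = new UserTransactionManager();
        utm.setForceShutdown(false);   // let in-flight transactions finish on shutdown
        return utm;
    }

    @Bean
    public JtaTransactionManager transactionManager() {
        // Spring's JtaTransactionManager delegates to the Atomikos UserTransaction/TransactionManager
        return new JtaTransactionManager(new UserTransactionImp(), atomikosTransactionManager());
    }
}

With such a bean in place, @Transactional methods are demarcated by the JTA transaction manager, and any XA-capable resources (XA datasources, JMS connection factories) enlisted during the method take part in the same global transaction.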
Once you get familiar with all of that, you may end up with a question similar to the one in this thread. Note that the latter was raised 4 years ago and nothing has changed drastically since then, so the answers there are still good.
In general, the development and implementation of distributed ACID transactions across several applications/services can be a really challenging epic, often comparable in effort to re-architecting the whole solution in order to avoid XA altogether.
Related
I am just learning JTA and can't figure out whether I should use it if I have only one database. Currently I use Hibernate 5 as the JPA provider, and if I need to use one transaction across several methods I just pass the EntityManager as an argument.
However, I don't like this approach, as I need to remember whether the transaction is open or not. I would like to find a library that helps me control transactions (but without Spring) in an SE environment. So, should I use JTA in my situation, or something different?
Normally, when talking about JTA, people mean distributed transactions across multiple systems (e.g. across two databases, or one database and one JMS-compliant message broker). If you only have one database, it is not necessary to use a JTA transaction, although it would also work. Instead, use a local transaction for the single database, which should in theory be faster than a JTA transaction.
On the other hand, if you are just talking about @Transactional as defined by JTA, which allows you to declaratively control the transaction boundary by annotating a method, without passing the EntityManager between methods, you should look into which frameworks support it.
Under the covers, a proxied EntityManager is injected into the different classes; when it is invoked, it gets the actual EntityManager from a ThreadLocal. So every class in the same thread gets the same EntityManager, which saves you from passing the EntityManager around between methods. A new EntityManager instance is bound to the ThreadLocal just before the @Transactional method is executed, using some sort of AOP technique.
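Conceptually, the binding works something like the sketch below; this is an illustration of the mechanism, not any framework's real code, and the class name is made up:

import java.util.concurrent.Callable;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;

public final class CurrentEntityManager {

    private static final ThreadLocal<EntityManager> CURRENT = new ThreadLocal<>();

    // what the injected proxy calls whenever your code uses the EntityManager
    public static EntityManager get() { return CURRENT.get(); }

    // roughly what a @Transactional interceptor does around your method
    public static Object invokeInTransaction(EntityManagerFactory emf, Callable<Object> method) throws Exception {
        EntityManager em = emf.createEntityManager();
        CURRENT.set(em);
        EntityTransaction tx = em.getTransaction();
        try {
            tx.begin();
            Object result = method.call();   // every class on this thread sees the same EntityManager
            tx.commit();
            return result;
        } catch (Exception e) {
            if (tx.isActive()) tx.rollback();
            throw e;
        } finally {
            CURRENT.remove();
            em.close();
        }
    }
}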
Please note that @Transactional has nothing to do with whether the underlying transaction is a JTA or a local transaction. It works for both transaction types.
I would use a framework that supports @Transactional, such as Spring, Quarkus or Micronaut, and configure Hibernate to use local transactions rather than JTA transactions for a single database.
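For example, with Spring and a single database, the sketch below (class, entity and helper names are made up) lets two methods share the same transaction and the same underlying EntityManager without passing it around:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    @PersistenceContext               // a thread-bound proxy, as described above
    private EntityManager em;

    @Transactional                    // a local JPA transaction is enough for one database
    public void placeOrder(Order order) {
        em.persist(order);
        audit(order);                 // runs in the same transaction, same EntityManager
    }

    private void audit(Order order) {
        em.persist(new AuditEntry(order.getId()));   // Order and AuditEntry are hypothetical entities
    }
}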
I have tried to work this out by myself, but I can't.
I need to save objects into multiple related tables of my database, and it must happen in one single transaction.
I am using Servlets, JSP, JDBC.
(I already have the DAO layer and the service layer.)
As we know, transactions should always live in the service layer.
When I was using Spring MVC, I always used this annotation in my services:
@Transactional
and configured a TransactionManager in my spring.xml.
Now I need to do the same with servlets.
Can anyone help me with a small example of transactions with servlets, or does anybody have suggestions for this?
You have different ways to manage transactions at the JDBC level.
The simplest way is a filter: you open a transaction at the beginning of request processing and commit (or roll back) at the end. It is as little invasive as possible in the other layers, but you cannot have transaction demarcation at the service layer.
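A minimal sketch of that filter approach could look like this (the datasource name and the request attribute key are assumptions, and how lower layers obtain the connection is up to you):

import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.*;
import javax.sql.DataSource;

public class TransactionFilter implements Filter {

    @Resource(name = "jdbc/myDataSource")   // hypothetical container-managed pool
    private DataSource dataSource;

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            req.setAttribute("jdbc.connection", con);  // lower layers pick the connection up from the request
            try {
                chain.doFilter(req, res);
                con.commit();
            } catch (Exception e) {
                con.rollback();
                throw new ServletException(e);
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}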
At the opposite extreme, you can add code to explicitly create and commit transactions in all (relevant) service methods. You can factor the real work into common methods to limit code duplication, but you will have to modify your whole service layer consistently.
An alternative, since you have an existing service layer, would be to mimic Spring and use proxies around your service classes. The proxies would create a transaction, call the real method, and commit the transaction. IMHO, it is only a slightly invasive method with little code duplication.
My choice would be to use method 1 for very simple use cases or prototyping and method 3 for more serious ones - but this is just my opinion.
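A sketch of method 3 with a JDK dynamic proxy might look like this; ConnectionHolder is a hypothetical ThreadLocal helper your DAOs would read the connection from, and the interface/class names are illustrative:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import javax.sql.DataSource;

public final class TransactionProxy implements InvocationHandler {

    private final Object target;
    private final DataSource dataSource;

    private TransactionProxy(Object target, DataSource dataSource) {
        this.target = target;
        this.dataSource = dataSource;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> serviceInterface, DataSource dataSource) {
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[] { serviceInterface },
                new TransactionProxy(target, dataSource));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            ConnectionHolder.bind(con);        // hypothetical ThreadLocal the DAOs read from
            try {
                Object result = method.invoke(target, args);
                con.commit();
                return result;
            } catch (InvocationTargetException e) {
                con.rollback();
                throw e.getCause();            // rethrow the service's original exception
            } catch (Throwable t) {
                con.rollback();
                throw t;
            } finally {
                ConnectionHolder.unbind();
            }
        }
    }
}

You would then expose the proxied service to the servlets, e.g. OrderService service = TransactionProxy.wrap(new OrderServiceImpl(), OrderService.class, dataSource) (names hypothetical), and every service method call becomes one transaction.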
I think first of all you need to understand which specification you would like to work with, and then figure out how to do the integration.
There are many techniques and technologies in Java for accessing the database.
In general, if you want to access the DB at the lowest layer (JDBC), you'll have to manage transactions by yourself.
This link can be useful because it provides a lot of examples. In general you call setAutoCommit(false) and then rollback/commit on the Connection JDBC interface.
If you wish to use something like Hibernate (note, you still don't need Spring for this), the Transaction interface can be handy.
Here is the Example
Spring, as an integration framework, allows transaction management by means of defining the relevant beans, so you essentially choose for yourself which transaction management technology should be used.
This is a broad topic; you might be interested in reading this to understand more about the Spring way of managing transactions.
In general, JDBC is the most low level of accessing the database in java, all other APIs are built on top of it.
Hope this helps
In your service method you should handle the transaction yourself; you'll find an example below:
// assumes the enclosing service method declares "throws SQLException"
Connection dbConnection = null;
try {
    dbConnection = getDBConnection();
    dbConnection.setAutoCommit(false);

    // do your database work: PreparedStatement inserts, updates
    // OR
    // if you are doing your work in DAOs, you can pass the connection to them:
    // XDao xDao = new XDao(dbConnection);
    // YDao yDao = new YDao(dbConnection);
    // xDao.doWork();
    // yDao.doWork();

    dbConnection.commit();
    System.out.println("Done!");
} catch (SQLException e) {
    System.out.println(e.getMessage());
    if (dbConnection != null) {
        dbConnection.rollback(); // undo everything done since setAutoCommit(false)
    }
} finally {
    // close prepared statements first, then the connection
    if (dbConnection != null) {
        dbConnection.close();
    }
}
For advanced patterns and a deeper understanding, I recommend this blog post here.
I'm trying to wrap my head around the value underneath the Java Transactions API (JTA) and one of its implementations, Bitronix. But as I dig deeper and deeper into the documentation, I just can't help but think of the following, simple example:
public interface Transactional {
    public void commit(Object obj);
    public void rollback();
}

public class TransactionalFileWriter extends FileWriter implements Transactional {
    @Override
    public void commit(Object obj) {
        String str = (String) obj;
        // Write the String to a file.
        write(str);
    }

    @Override
    public void rollback() {
        // Obtain a handle to the File we are writing to, and delete the file.
        // This returns the file system to the state it was in before we created a file and started writing to it.
        File f = getFile();
        // This is just pseudo-code for the sake of this example.
        f.delete();
    }
}

// Some method in a class somewhere...
public void doSomething(File someFile) {
    TransactionalFileWriter txFileWriter = getTxFW(someFile);
    try {
        txFileWriter.commit("Create the file and write this message to it.");
    } catch (Throwable t) {
        txFileWriter.rollback();
    }
}
Don't get too caught up in the actual code above. The idea is simple: a transactional file writer that creates a file and writes to it. Its rollback() method deletes the file, thus returning the file system to the same state it was in before the commit(Object).
Am I missing something here? Is this all that JTA offers? Or is there a whole different set of dimensions/aspects to transactionality that isn't represented by my simple example above? I'm guessing the latter, but I have yet to see anything concrete in the JTA docs. If I am missing something, what is it, and can someone show me concrete examples? I can see transactionality being a huge component of JDBC, but I would like to see an example of JTA in action with something other than databases.
As everyone else has mentioned, the primary benefit of JTA is not the single-transaction case, but the orchestration of multiple transactions.
Your "transactional file" is an excellent conceptual example when used in the proper context.
Consider a contrived use case.
You're uploading a picture that has associated metadata, and you then want to alert the infrastructure that the file has arrived.
This "simple" task is fraught with reliability issues.
For example, this workflow:
String pathName = saveUploadedFile(myFile);
saveMetaData(myFile.size(), myFile.type(), currentUser, pathName);
queueMessageToJMS(new FileArrivalEvent(user, pathName));
That bit of code involves the file system and 2 different servers (DB and JMS).
If saveUploadedFile succeeds but saveMetaData does not, you now have an orphaned file on the file system, a "file leak" so to speak. If saveMetaData succeeds but the queueing does not, you have saved the file, but "nobody knows about it". The success of the transaction relies upon all 3 components successfully performing their tasks.
Now, throw in JTA (not real code):
beginWork();
try {
    String pathName = saveUploadedFile(myFile);
    saveMetaData(myFile.size(), myFile.type(), currentUser, pathName);
    queueMessageToJMS(new FileArrivalEvent(user, pathName));
    commitWork();
} catch (Exception e) {
    rollbackWork();
}
Now it "all works", or "none of it works".
Normally folks jump through hoops to make this kind of thing work safely, since most systems do not have transaction managers. But with a transaction manager (i.e. JTA), the TM manages all of the hoops for you, and you get to keep your code clean.
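For reference, the same flow with the actual JTA API might look roughly like the sketch below; UploadedFile, User, FileArrivalEvent and the three helper methods are the same hypothetical ones as above, and utx would typically be provided by the container:

import javax.annotation.Resource;
import javax.transaction.UserTransaction;

public class UploadHandler {

    @Resource
    private UserTransaction utx;   // supplied by the container / transaction manager

    public void handleUpload(UploadedFile myFile, User user) throws Exception {
        utx.begin();
        try {
            String pathName = saveUploadedFile(myFile);                    // hypothetical helper
            saveMetaData(myFile.size(), myFile.type(), user, pathName);    // hypothetical helper
            queueMessageToJMS(new FileArrivalEvent(user, pathName));       // hypothetical helper
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}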
If you survey the industry you will find very few transaction managers. Originally they were proprietary programs used by "Enterprise" grade systems. TIBCO is a famous one, IBM has one, Microsoft has one. Tuxedo used to be popular.
But with Java, and JTA, and the ubiquitous Java EE (etc) servers "everyone" has a transaction manager. We in the Java world get this orchestration for "free". And it's handy to have.
Java EE made transaction managers ubiquitous, and transaction handling a background consideration. Java EE means "never having to write commit() again". (Obviously Spring offers similar facilities).
For most systems, it's not necessary. That's why most people don't know much about it, or simply don't miss it. Most systems populate a single database, or simply don't worry about the issues surrounding orchestration of multiple systems. The process can be lossy, or they have built their own clean-up mechanisms, whatever.
But when you need it, it's very nice. Committing to multiple systems simultaneously cleans up a lot of headaches.
The biggest feature of JTA is that you can compose several transactional stores in one application and run transactions that span these independent stores.
For instance, you can have a DB, a distributed transactional key-value store and your simple FileWriter, and have a transaction that performs operations on all of these and commits all the changes in all the stores at once.
Take a look at Infinispan. It's a transactional data grid; it uses JTA and can be used in combination with other JTA transactional services.
Edit:
Basically, JTA is connected to the X/Open XA standard and it provides the means to interact with X/Open XA resources directly in Java code. You can use already existing data stores which hold X/Open XA compliant resources, such as databases, distributed data grids and so on. Or you can define your own resources by implementing javax.transaction.xa.XAResource. Then, when your user transaction uses these resources, the transaction manager will orchestrate everything for you, no matter where the resources are located or in which data store.
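To connect this to the question's example: making the TransactionalFileWriter a first-class XA participant would mean implementing javax.transaction.xa.XAResource for it. A bare skeleton might look like the sketch below; it only shows the shape of the contract, since a real resource manager must make prepare() durable and support recovery, which is exactly the hard part:

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class FileXAResource implements XAResource {

    @Override
    public void start(Xid xid, int flags) throws XAException {
        // associate subsequent file work with this transaction branch (xid)
    }

    @Override
    public void end(Xid xid, int flags) throws XAException {
        // dissociate the work from the branch
    }

    @Override
    public int prepare(Xid xid) throws XAException {
        // phase 1: vote whether the pending file changes can be made durable
        return XA_OK;
    }

    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        // phase 2: make the file changes permanent/visible
    }

    @Override
    public void rollback(Xid xid) throws XAException {
        // undo the pending file changes (e.g. delete the temporary file)
    }

    @Override
    public Xid[] recover(int flag) throws XAException {
        return new Xid[0]; // no in-doubt branches in this toy sketch
    }

    @Override
    public void forget(Xid xid) throws XAException { }

    @Override
    public boolean isSameRM(XAResource other) throws XAException {
        return other == this;
    }

    @Override
    public int getTransactionTimeout() throws XAException { return 0; }

    @Override
    public boolean setTransactionTimeout(int seconds) throws XAException { return false; }
}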
The whole business is managed by the transaction manager, which is responsible for synchronizing the independent data stores. JTA doesn't come with a transaction manager; JTA is just an API. You could write your own if you wished to (javax.transaction.TransactionManager), but obviously that's a difficult task. Instead, what you want is to use an already implemented JTA service/library which features a transaction manager. For instance, if you use Infinispan in your application, you can use its transaction manager to allow your transactions to interact with different data stores as well. It's best to seek further information on how to accomplish this from the implementors of the JTA interface.
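For example, inside a Java EE container the transaction manager's UserTransaction is usually available under the standard JNDI name java:comp/UserTransaction; standalone JTA libraries bind their own names, so treat the lookup below as an assumption about the environment:

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class MultiStoreWriter {

    public void writeEverywhere() throws Exception {
        UserTransaction utx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction"); // standard name in a Java EE container
        utx.begin();
        try {
            // ... operate on several enlisted XA resources: JDBC, JMS, Infinispan, ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}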
You can find the full JTA API documentation here, though it's pretty long. There are also some tutorials available that talk about how to use the Java EE Transaction Manager and update multiple data stores, but they are pretty obscure and don't provide any code samples.
You can check out Infinispan's documentation and tutorials, though I can't see any example that combines Infinispan with another data store.
Edit 2:
To answer your question from the comment: your understanding is more or less correct, but I'll try to clarify it further.
It'll be easier to explain the architecture and answer your question with a picture; the figures below are taken from the JTA 1.1 specification.
This is the X/Open XA architecture:
Each data store (a database, message queue, SAP ERP system, etc.) has its own resource manager. In the case of a relational database, the JDBC driver is the resource adapter that represents the database's Resource Manager in Java. Each resource has to be available through the XAResource interface (so that the Transaction Manager can manage them even without knowing the implementation details of a specific data store).
Your application communicates both with the Resource Managers (to get access to the specific resources) through the resource adapters, and with the Transaction Manager (to start/finish a transaction) through the UserTransaction interface. Each Resource Manager needs to be initialized first, and it has to be configured for global transactions (i.e. spanning several data stores).
So basically, yes, data stores are independent logical units that group some resources. They also expose an interface that allows performing local transactions (confined to that specific data store). This interface might perform better or might expose some additional functionality specific to that data store which is not available through the JTA interface.
This is the architecture of the JTA environment:
The small half-circle represents the JTA interface. In your case you're mostly interested in the JTA UserTransaction interface. You could also use EJB (transactional beans) and the Application Server would manage transactions for you, but that's a different way to go.
From the transaction manager's perspective, the actual implementation of the transaction services does not need to be exposed; only high-level interfaces need to be defined to allow transaction demarcation, resource enlistment, synchronization and recovery process to be driven from the users of the transaction services.
So the Transaction Manager can be understood as an interface which merely represents the actual mechanism used to manage transactions, such as a JTS implementation, but thinking of it as the whole mechanism is not an error either.
From what I understand, if you run for instance a JBoss application server, you're already equipped with a Transaction Manager with the underlying transaction service implementation.
I am new to JTA and I need a method to retrieve some elements from the database. I can do this through an EntityManager, but that works only for RESOURCE_LOCAL.
I want to know how I can do this:
Query q = em.createNamedQuery("AnyQuery");
q.getResultList();
without the use of an EntityManager.
Any ideas?
The question itself shows that you don't understand any of the technologies you try to work with. You probably need to study some more general stuff before you do any actual development.
you are probably confusing JTA and JPA,
your statement about RESOURCE_LOCAL is not true (and irrelevant) - there are JTA and RESOURCE_LOCAL transactions, and in Java EE you usually use the former,
your idea of using named JPA queries without an EntityManager is plain absurd and probably stems from some kind of misunderstanding (what would be the point of using named queries without an entity manager?),
saying "some elements from the database" shows that you can't really tell the difference between records and mapped objects, in which case you probably should not use JPA at all.
I am not really expecting that you accept this answer. That's just my frustration taking over.
EDIT
OK, now that you mentioned JSF I understand more of your problem.
I assume you want to use JPA. In such case you have a choice of:
creating your own EntityManager (in that case you cannot have it injected; instead you have to use an EntityManagerFactory and build your own). This is an "application-managed EntityManager". You don't really want to do this.
using an injected EntityManager (a "container-managed EntityManager"). This is the standard choice.
Now you need a transaction. Since you should be using a JTA EntityManager, you will need a transaction object that is responsible for coordinating the whole thing. Again, you have two choices:
in a JSF bean, inject a UserTransaction (using the @Resource annotation). This is messy, error-prone and takes a lot of boilerplate, but you will find all the necessary methods there. You can create your own (application-managed) EntityManager, call its joinTransaction method and then call begin/commit on the UserTransaction. This would be an "application-managed transaction".
move your EntityManager code to an EJB. It only takes a couple of lines of code and a single annotation (@Stateless). All the code inside an EJB is - magically - wrapped inside a transaction that the container manages for you. This is the "container-managed transaction" - the default and common choice.
Each of the things above could (and should) be expanded with some additional information. But the short path for you is:
create an EJB (a simple class with the @Stateless annotation),
move the method that uses the EntityManager to the EJB,
inject the EJB into your managed bean (using the @EJB annotation) and call the relevant method.
The JTA transaction will happen around each call to any EJB method. This should get you started :-)
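A minimal sketch of that short path might look like the following; the entity type, the bean names and the scope annotation are illustrative (the named query "AnyQuery" is the one from your snippet), and the two classes would of course live in separate files:

import java.util.List;
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ItemService {

    @PersistenceContext                 // container-managed, JTA EntityManager
    private EntityManager em;

    public List<Item> findAll() {
        // every call to this method runs inside a container-managed JTA transaction
        return em.createNamedQuery("AnyQuery", Item.class).getResultList();
    }
}

@Named
@RequestScoped
public class ItemBean {              // the JSF/CDI managed bean, in a separate file

    @EJB
    private ItemService itemService;    // the container injects the transactional EJB

    public List<Item> getItems() {
        return itemService.findAll();
    }
}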
I'm developing a web app with Spring and Hibernate, and I was so obsessed with making the application thread-safe and able to support heavy load that, based on my boss's recommendation, I ended up writing my own session and a session container to implement a session-per-request pattern. Plus, I have a lot of DAOs, and since I wasn't willing to write the same save method for all of them, I copy-pasted this Hibernate GenericDAO (I can't tell whether it's the same thing, because at the time Hibernate wasn't owned by JBoss) and did the plumbing. Under pressure, everything quickly became complicated, and in production there are StaleObjectExceptions and duplicated data. I have the feeling it's time to review what I've done, simplify it, and make it more robust for handling large amounts of data. One thing you should know is that one request involves many DAOs.
There is a Quartz job running for some updates in the database.
As much as I want to tune everything for the better, I lack the time to do the necessary research, plus Hibernate is kind of huge to learn.
So here it is: I'd like to borrow your experience and ask a few questions to know what direction to take.
Question 1: is Hibernate's generated uuid safe enough in a threaded environment and for avoiding StaleObjectException?
Question 2: what is the best strategy for using Hibernate's getCurrentSession in a thread-safe scenario? (I've read about the ThreadLocal stuff but didn't understand it well enough, so I didn't use it.)
Question 3: will HibernateTemplate do as the simplest approach?
Question 4: what would be your choice if you were to implement a connection pool and tuning requirements for a production server?
Please do not hesitate to point me to blogs or resources online; all I need is an approach that works for my scenario - your approach if you were to do this.
Thanks for reading this; everybody's ideas are welcome.
I'm developing a web app with Spring and Hibernate, and I was so obsessed with making the application thread-safe and able to support heavy load that, based on my boss's recommendation, I ended up writing my own session and a session container to implement a session-per-request pattern.
You should just drop all this code and use the Spring/Hibernate APIs instead: fewer bugs, less maintenance.
I copy-pasted this Hibernate GenericDAO (I can't tell whether it's the same thing, because at the time Hibernate wasn't owned by JBoss) and did the plumbing, and under pressure everything quickly became complicated (...)
You can use a GenericDao and inject the required stuff with Spring.
Question 1: is Hibernate's generated uuid safe enough in a threaded environment and for avoiding StaleObjectException?
To strictly answer your question, here is what the Reference Guide writes about the uuid generator:
5.1.4.1. Generator
...
uuid: uses a 128-bit UUID algorithm to generate identifiers of type string that are unique within a network (the IP address is used). The UUID is encoded as a string of 32 hexadecimal digits in length.
So I consider it safe. But I think your StaleObjectExceptions are unrelated (that's another problem).
Question 2: what is the best strategy for using Hibernate's getCurrentSession in a thread-safe scenario? (I've read about the ThreadLocal stuff but didn't understand it well enough, so I didn't use it.)
The best strategy is to just use it; sessionFactory.getCurrentSession() will always give you a Session scoped to the current database transaction, aka a "contextual session". Again, quoting the Reference Documentation:
2.5. Contextual sessions
Most applications using Hibernate need some form of "contextual" session, where a given session is in effect throughout the scope of a given context. However, across applications the definition of what constitutes a context is typically different; different contexts define different scopes to the notion of current. Applications using Hibernate prior to version 3.0 tended to utilize either home-grown ThreadLocal-based contextual sessions, helper classes such as HibernateUtil, or utilized third-party frameworks, such as Spring or Pico, which provided proxy/interception-based contextual sessions.
(...)
However, as of version 3.1, the processing behind SessionFactory.getCurrentSession() is now pluggable. To that end, a new extension interface, org.hibernate.context.CurrentSessionContext, and a new configuration parameter, hibernate.current_session_context_class, have been added to allow pluggability of the scope and context of defining current sessions.
See the Javadocs for the org.hibernate.context.CurrentSessionContext interface for a detailed discussion of its contract. It defines a single method, currentSession(), by which the implementation is responsible for tracking the current contextual session. Out-of-the-box, Hibernate comes with three implementations of this interface:
org.hibernate.context.JTASessionContext: current sessions are tracked and scoped by a JTA transaction. The processing here is exactly the same as in the older JTA-only approach. See the Javadocs for details.
org.hibernate.context.ThreadLocalSessionContext: current sessions are tracked by thread of execution. See the Javadocs for details.
org.hibernate.context.ManagedSessionContext: current sessions are tracked by thread of execution. However, you are responsible to bind and unbind a Session instance with static methods on this class: it does not open, flush, or close a Session.
(...)
There is no need to implement your own ThreadLocal-based solution nowadays, don't do that.
Question 3: will HibernateTemplate do as the simplest approach?
Well, HibernateTemplate is not deprecated, but it is not recommended anymore, and I prefer to implement template-less DAOs:
public class ProductDaoImpl implements ProductDao {

    private SessionFactory sessionFactory;

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public Collection loadProductsByCategory(String category) {
        return this.sessionFactory.getCurrentSession()
                .createQuery("from test.Product product where product.category=?")
                .setParameter(0, category)
                .list();
    }
}
Where the SessionFactory is injected by Spring. I suggest reading "So should you still use Spring's HibernateTemplate and/or JpaTemplate??" for the complete background, as well as the whole section 13.3. Hibernate of the Spring documentation on ORM Data Access.
Question 4: what would be your choice if you were to implement a connection pool and tuning requirements for a production server?
Hmm... what? I would never implement my own connection pool; I'd use the one from my application server. Maybe you should clarify this question.
Update: in production, I wouldn't use Hibernate's built-in connection pool but would configure Hibernate to use an application-server-provided JNDI datasource (and thus the application server's connection pool). From the documentation:
3.3. JDBC connections
...
Here is an example hibernate.properties file for an application server provided JNDI datasource:
hibernate.connection.datasource = java:/comp/env/jdbc/test
hibernate.transaction.factory_class = \
org.hibernate.transaction.JTATransactionFactory
hibernate.transaction.manager_lookup_class = \
org.hibernate.transaction.JBossTransactionManagerLookup
hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
JDBC connections obtained from a JNDI datasource will automatically participate in the container-managed transactions of the application server.