How to use a JTA transaction with two databases? - java

App1 interacts with App2 (an EJB application) through a client API exposed by App2. App2 uses CMT-managed JTA transactions in JBoss, and we obtain the UserTransaction from App2 (JBoss) via a JNDI lookup.
App1 calls App2 to insert data into DS2, wrapped in the UserTransaction's begin() and commit().
App1 also inserts data into DS1 via Hibernate JPA, using the JPATransactionManager.
Is it possible to wrap both of the above DB operations in a single (distributed) transaction?
(The original post included an image describing the requirement.)

To do this it's necessary to implement your own transactional resource, capable of joining an ongoing JTA transaction. One way to see how this is done is to look at the XA driver code for a database or JMS resource and base yourself on that.
This is not trivial to do, and it is a very rare use case, usually solved in practice by adopting an alternative design. One way would be to extract the necessary code from App2 into a jar library and use it in Tomcat with a JTA transaction manager such as Atomikos, connected to two XA JTA datasources.
Another way is to flush the SQL statements to the database on the Tomcat side before sending a synchronous call to JBoss, and then commit or roll back in Tomcat depending on whether the transaction in JBoss went through. This does not guarantee it will work 100% of the time (network failures etc.), but it might be acceptable depending on what the system does and the business consequences of a failed transaction.
Yet another way is to make the operation revertible on the JBoss side and expose a compensation service that Tomcat calls when errors are detected. If both servers were JBoss, you could take advantage of the JBoss Narayana engine for this.
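The compensation approach can be sketched in plain Java. This is only a simulation of the pattern's control flow (the step/compensate names are made up, not a Narayana API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of the compensation pattern: each completed step
// registers an undo action; if a later step fails, the already-completed
// steps are compensated in reverse order.
public class CompensatingWorkflow {
    private final Deque<Runnable> compensations = new ArrayDeque<>();

    /** Run a step; if it succeeds, remember how to undo it. */
    public void step(Runnable action, Runnable compensation) {
        action.run();
        compensations.push(compensation);
    }

    /** Undo all completed steps, newest first. */
    public void compensate() {
        while (!compensations.isEmpty()) {
            compensations.pop().run();
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        CompensatingWorkflow wf = new CompensatingWorkflow();
        try {
            wf.step(() -> log.append("insert-tomcat;"), () -> log.append("delete-tomcat;"));
            // the remote call to JBoss fails, so the local insert is compensated
            wf.step(() -> { throw new RuntimeException("JBoss call failed"); }, () -> {});
        } catch (RuntimeException e) {
            wf.compensate();
        }
        System.out.println(log); // insert-tomcat;delete-tomcat;
    }
}
```

Note that, unlike a real distributed transaction, the committed local insert is briefly visible to other readers before the compensation runs; that window is the price of this design.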
Which way is best depends on the use case, but implementing your own XA transactional resources is a big undertaking; it would be simpler to change the design. The reason very few projects do it is that it's complex and there are simpler alternatives.

Tomcat is a web server, so it does not support global transactions.
JBoss is an application server, so it supports global transactions.
If you have to combine both, you have to use a standalone transaction manager such as JOTM or Atomikos, which handles the commits and rollbacks.
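What a transaction manager such as JOTM or Atomikos does for two XA resources is the two-phase commit protocol. Here is a minimal plain-Java simulation of the idea; the Resource interface below is an illustrative stand-in, not the real javax.transaction.xa.XAResource API:

```java
import java.util.List;

// Plain-Java simulation of two-phase commit: the coordinator first asks
// every resource to prepare (vote); only if all vote yes does it commit,
// otherwise it rolls everything back.
public class TwoPhaseCommit {

    public interface Resource {
        boolean prepare();   // vote: can I commit?
        void commit();
        void rollback();
    }

    /** Returns true if the global transaction committed. */
    public static boolean run(List<Resource> resources) {
        boolean allPrepared = resources.stream().allMatch(Resource::prepare);
        if (allPrepared) {
            resources.forEach(Resource::commit);
        } else {
            // a real coordinator only rolls back resources that prepared;
            // the sketch rolls back everything for simplicity
            resources.forEach(Resource::rollback);
        }
        return allPrepared;
    }

    public static void main(String[] args) {
        Resource yes = new Resource() {
            public boolean prepare() { return true; }
            public void commit() { System.out.println("committed"); }
            public void rollback() { System.out.println("rolled back"); }
        };
        System.out.println(run(List.of(yes, yes))); // true: both resources committed
    }
}
```

A real XA coordinator also writes its decision to a durable log so it can recover from a crash between the two phases; that recovery logic is the genuinely hard part this sketch leaves out.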

Related

2PC transactions (cross transactions) in GlassFish 5

Does anyone know whether GlassFish 5 supports global transactions with 2PC (the XA protocol), without installing extra tools?
I have looked for information on the GlassFish page ("The Open Source Java EE Reference Implementation") where I downloaded the app server, and on other pages, but I have had no luck.
I am trying to run a transaction across two microservices that each insert a value into the database. I have configured GlassFish's JNDI with "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" and it appears to work, but when I check the database only the value from one service has been added (the global transaction with 2PC does not work). I am starting to think that GlassFish has no support for 2PC.
I have read that it can be done with Tomcat, but that requires adding tools like Atomikos, Bitronix, etc. The idea is to do it with GlassFish without installing anything more.
Regards.
Does anyone know whether GlassFish 5 supports global transactions with 2PC (the XA protocol), without installing extra tools?
GlassFish 5 supports transactions using XA datasources. You can create a program that executes transactions combining operations on multiple databases. For instance, you can create a transaction that performs operations on both Oracle and IBM DB2 databases. If one of the operations in the transaction fails, the other operations (in the same and in the other databases) are not executed, or are rolled back.
I am trying to run a transaction across two microservices that each insert a value into the database. I have configured GlassFish's JNDI with "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" and it appears to work, but when I check the database only the value from one service has been added.
If your program invokes a REST/web service inside a transaction, the operations performed by that REST/web service do not join the transaction. An error in the calling program will not roll back the operations performed by the already-invoked REST/web service.

Java EE / EJB vs Spring for Distributed Transaction management with multiple DB Clusters

I have a requirement to produce a prototype (running in a J2EE compatible application server with MySQL) demonstrating the following
Demonstrate the ability to distribute a transaction over multiple databases located at different sites globally (application-managed data replication)
Demonstrate ability to write a transaction to a database from a choice of a number of database clusters located at multiple locations. The selection of which database to write to is based on user location. (Database managed data replication)
I have the option to choose either a Spring stack or a Java EE stack (EJB etc.). It would be useful to know your opinions as to which stack is better at supporting distributed transactions on multiple database clusters.
If possible, could you also please point me to any resources you think would be useful to learn of how to implement the above using either of the two stacks. I think seeing examples of both would help in understanding how they work and probably be in a better position to decide which stack to use.
I have seen a lot of sites by searching on Google but most seem to be outdated (i.e. pre EJB 3 and pre Spring 3)
Thanks
I would use the JavaEE stack the following way:
configure an XA DataSource for each database server
according to the user's location, a stateless EJB looks up the corresponding DataSource and gets a connection from it
when broadcasting a transaction to all servers, a stateless EJB has to iterate over all configured DataSources and execute one or more queries on each, but in a single transaction
In case of a technical failure, the transaction is rolled back on all concerned servers. In case of a business failure, the code can trigger a rollback thanks to context.setRollbackOnly().
That way, you benefit from Java EE's automatic distributed transaction demarcation first, and then you can use more complex patterns if you need to manage transactions manually.
BUT the more servers you have enlisted in your transaction, the longer the two-phase commit will last, especially if you have high latency between systems. And I doubt MySQL is the best relational database implementation for such complex distributed transactions.
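The per-location DataSource selection in the steps above is just a routing decision. Here is a plain-Java sketch of that idea; the JNDI names are made up, and the String values stand in for real XA DataSource references:

```java
import java.util.Map;

// Sketch of routing a write to one of several database clusters based on
// the user's location. In a real stateless EJB, the map values would be
// JNDI names of XA DataSources to look up; Strings stand in for them here.
public class LocationRouter {
    private final Map<String, String> dataSourceByRegion;
    private final String defaultDataSource;

    public LocationRouter(Map<String, String> dataSourceByRegion, String defaultDataSource) {
        this.dataSourceByRegion = dataSourceByRegion;
        this.defaultDataSource = defaultDataSource;
    }

    /** Pick the cluster for this user; fall back to a default cluster. */
    public String dataSourceFor(String userLocation) {
        return dataSourceByRegion.getOrDefault(userLocation, defaultDataSource);
    }

    public static void main(String[] args) {
        LocationRouter router = new LocationRouter(
            Map.of("EU", "jdbc/euCluster", "US", "jdbc/usCluster"),
            "jdbc/globalCluster");
        System.out.println(router.dataSourceFor("EU")); // jdbc/euCluster
    }
}
```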

Hibernate L2 cache. Read-write or transactional cache concurrency strategy on cluster?

I’m trying to figure out which cache concurrency strategy should I use for my application (for entity updates, in particular). The application is a web-service developed using Hibernate, is deployed on Amazon EC2 cluster and works on Tomcat, so no application server there.
I know that there are nonstrict-read-write, read-write and transactional cache concurrency strategies for data that can be updated, and that there are mature, popular, production-ready L2 cache providers for Hibernate: Infinispan, Ehcache, Hazelcast.
But I don't completely understand the difference between the transactional and read-write caches from the Hibernate documentation. I thought that the transactional cache is the only choice for a cluster application, but now (after reading some topics), I'm not so sure about that.
So my question is about the read-write cache. Is it cluster-safe? Does it guarantee data synchronization between the database and the cache, as well as synchronization between all the connected servers? Or is it only suitable for single-server applications, meaning I should always prefer the transactional cache?
For example, if a database transaction that is updating an entity field (first name, etc.) fails and is rolled back, will the read-write cache discard the changes, or will it propagate the bad data (the updated first name) to all the other nodes?
Does it require a JTA transaction for this?
The Concurrency strategy configuration for JBoss TreeCache as 2nd level Hibernate cache topic says:
`READ_WRITE` is an interesting combination. In this mode Hibernate itself works as a lightweight XA coordinator, so it doesn't require a full-blown external XA. Short description of how it works:
In this mode Hibernate manages the transactions itself. All DB actions must be inside a transaction; autocommit mode won't work.
During the flush() (which might happen multiple times during the transaction's lifetime, but usually happens just before the commit) Hibernate goes through the session and searches for updated/inserted/deleted objects. These objects are then first saved to the database, and then locked and updated in the cache, so concurrent transactions can neither update nor read them.
If the transaction is then rolled back (explicitly or because of some error), the locked objects are simply released and evicted from the cache, so other transactions can read/update them.
If the transaction is committed successfully, then the locked objects are simply released and other threads can read/write them.
Is there some documentation how this works in a cluster environment?
It seems that the transactional cache works correctly for this, but requires JTA environment with a standalone transaction manager (such as JBossTM, Atomikos, Bitronix), XA datasource and a lot of configuration changes and testing. I managed to deploy this, but still have some issues with my frameworks. For instance, Google Guice IoC does not support JTA transactions and I have to replace it with Spring or move the service to some application server and use EJB.
So which way is better?
Thanks in advance!
Summary of differences
Nonstrict read-write and read-write are both asynchronous strategies, meaning the cache is updated after the transaction is completed. Transactional is obviously synchronous, and the cache is updated within the transaction.
Nonstrict read-write never locks an entity, so there's always the chance of a dirty read. Read-write always soft-locks an entity, so any simultaneous access is sent to the database. However, there is a remote chance that read-write might not produce repeatable-read isolation.
The best way to understand the differences between these strategies is to see how they behave during insert, update and delete operations. You can check out my post here, which describes the differences in further detail. Feel free to comment.
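The soft-lock behaviour that separates read-write from nonstrict-read-write can be simulated in plain Java. This mimics the strategy's observable semantics only; it is not Hibernate's implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Simulation of the read-write strategy's soft lock: while an entity is
// being updated, its cache entry is replaced by a lock marker, so
// concurrent readers miss the cache and fall through to the database.
public class ReadWriteCache {
    private static final Object LOCK = new Object(); // soft-lock marker
    private final Map<String, Object> cache = new HashMap<>();

    /** Empty if locked or absent: the caller must read from the database. */
    public Optional<Object> get(String key) {
        Object v = cache.get(key);
        return (v == null || v == LOCK) ? Optional.empty() : Optional.of(v);
    }

    /** Called at flush time: the DB row is written, the entry soft-locked. */
    public void beginUpdate(String key) {
        cache.put(key, LOCK);
    }

    /** Commit: release the lock and publish the new value. */
    public void commitUpdate(String key, Object newValue) {
        cache.put(key, newValue);
    }

    /** Rollback: just evict; the next reader repopulates from the DB. */
    public void rollbackUpdate(String key) {
        cache.remove(key);
    }

    public static void main(String[] args) {
        ReadWriteCache c = new ReadWriteCache();
        c.beginUpdate("user:1");
        System.out.println(c.get("user:1").isPresent()); // false: reader must hit the DB
    }
}
```

This is why a rollback never publishes bad data with read-write: the locked entry is evicted rather than updated, and the next read repopulates the cache from the database.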
So far I've only seen clustered 2LC working with transactional cache modes. That's precisely what Infinispan does, and in fact, Infinispan has so far stayed away from implementing the other cache concurrency modes. To lighten the transactional burden, Infinispan integrates via transaction synchronizations with Hibernate as opposed to XA.

Spring remote services with a transaction context

I have the following scenario:
I have an interface server which listens on a queue and receives messages from the "outside world". This server then calls an "internal" business service, which in turn calls other services, and so on.
These services can each reside on a different machine, and can be clustered for that matter.
I need the notion of a transaction to span across these services and machines.
My development stack includes Spring (3.0.5) and JPA 2.0(Hibernate in background) on a J2SE environment.
Can I achieve this without an app server, assuming I plug in an external JTA transaction manager (such as Atomikos)?
We've chosen to go with Spring for many reasons, the most important ones being the service abstractions, intensive DI, and the ability to work without a heavy app server. I know we can use Spring in an app server, but if someone is going to recommend this I'd like to hear why it should be done, assuming I can forfeit Spring and go all app server.
BTW, just to reassure anyone reading this post: yes, we've thought about the problematic aspects of a distributed transaction, but we still think we will need such a transaction, as this is the business logic of the service and it will need to span machines, since some of the services will be under a lot of pressure.
Thanks in advance,
Ittai
We ended up using JBoss with Spring.
JBoss indeed supplied the distributed transactions that were needed while Spring contained all DI and such.
We still kept Spring as we felt its IoC was cleaner and more comfortable.
It is possible we should have used CDI in JBoss, but that was not on our radar.
We use Spring 3 and Atomikos for distributed (XA) transactions on Apache Tomcat and Oracle databases in production, so for us this is a very useful setup. Have a look at the Atomikos Spring integration example:
http://www.atomikos.com/Documentation/SpringIntegration

Besides EAR and EJB, what do I get from a Java EE app server that I don't get in a servlet container like Tomcat?

We use Tomcat to host our WAR-based applications. They are servlet-container-compliant J2EE applications, with the exception of org.apache.catalina.authenticator.SingleSignOn.
We are being asked to move to a commercial Java EE application server.
The first downside to changing that I see is the cost. No matter what the charges for the application server, Tomcat is free.
Second is the complexity. We use neither EJB nor EAR features (of course not, we can't), and have not missed them.
What then are the benefits I'm not seeing?
What are the drawbacks that I haven't mentioned?
Mentioned were...
JTA - Java Transaction API - we control transactions via database stored procedures.
JPA - Java Persistence API - we use JDBC and, again, stored procedures to persist.
JMS - Java Message Service - we use XML over HTTP for messaging.
This is good, please more!
When we set out with the goal to Java EE 6 certify Apache Tomcat as Apache TomEE, here are some of the gaps we had to fill in order to finally pass the Java EE 6 TCK.
Not a complete list, but some highlights that might not be obvious even with the existing answers.
No TransactionManager
Transaction management is definitely required for any certified server. In any web component (servlet, filter, listener, JSF managed bean) you should be able to get a UserTransaction injected like so:
@Resource UserTransaction transaction;
You should be able to use the javax.transaction.UserTransaction to create transactions. All the resources you touch in the scope of that transaction should be enrolled in it. This includes, but is not limited to, the following objects:
javax.sql.DataSource
javax.persistence.EntityManager
javax.jms.ConnectionFactory
javax.jms.QueueConnectionFactory
javax.jms.TopicConnectionFactory
javax.ejb.TimerService
For example, if in a servlet you start a transaction then:
Update the database
Fire a JMS message to a topic or queue
Create a Timer to do work at some later point
...and then one of those things fails, or you simply choose to call rollback() on the UserTransaction, then all of those things are undone.
No Connection Pooling
To be very clear there are two kinds of connection pooling:
Transactionally aware connection pooling
Non-Transactionally aware connection pooling
The Java EE specs do not strictly require connection pooling, however if you have connection pooling, it should be transaction aware or you will lose your transaction management.
What this means is basically:
Everyone in the same transaction should have the same connection from the pool
The connection should not be returned to the pool until the transaction completes (commit or rollback) regardless if someone called close() or any other method on the DataSource.
A common library used in Tomcat for connection pooling is commons-dbcp. We wanted to use this in TomEE as well, but it did not support transaction-aware connection pooling, so we actually added that functionality into commons-dbcp (yay, Apache) and it is there as of commons-dbcp version 1.4.
Note, that adding commons-dbcp to Tomcat is still not enough to get transactional connection pooling. You still need the transaction manager and you still need the container to do the plumbing of registering connections with the TransactionManager via Synchronization objects.
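The two rules above can be made concrete with a plain-Java simulation. A stand-in Conn class is used instead of java.sql.Connection, and the txId string stands in for the current JTA transaction; this shows the semantics, not commons-dbcp code:

```java
import java.util.HashMap;
import java.util.Map;

// Simulation of a transaction-aware pool: while a transaction is active,
// every "getConnection" for that transaction returns the same connection,
// and returning it to the pool is deferred until the transaction ends.
public class TxAwarePool {
    public static class Conn {
        boolean returnedToPool = false;
        public void close() { /* no-op while the transaction is active */ }
    }

    private final Map<String, Conn> byTransaction = new HashMap<>();

    /** Everyone in the same transaction gets the same connection. */
    public Conn getConnection(String txId) {
        return byTransaction.computeIfAbsent(txId, id -> new Conn());
    }

    /** Called on commit or rollback: only now does the connection go back. */
    public void transactionCompleted(String txId) {
        Conn c = byTransaction.remove(txId);
        if (c != null) c.returnedToPool = true;
    }

    public static void main(String[] args) {
        TxAwarePool pool = new TxAwarePool();
        System.out.println(pool.getConnection("tx-1") == pool.getConnection("tx-1")); // true
    }
}
```

In a real container, the "transactionCompleted" step is the plumbing mentioned above: the pool registers a Synchronization with the TransactionManager and releases the connection in its afterCompletion callback.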
In Java EE 7 there's talk of adding a standard way to encrypt DB passwords and package them with the application in a secure file or external storage. This will be one more feature that Tomcat will not support.
No Security Integration
WebServices security, JAX-RS SecurityContext, EJB security, JAAS login and JAAC are all security concepts that by default are not "hooked up" in Tomcat even if you individually add libraries like CXF, OpenEJB, etc.
These APIs are all of course supposed to work together in a Java EE server. There was quite a bit of work we had to do to get all of these to cooperate, and to do it on top of the Tomcat Realm API so that people could use all the existing Tomcat Realm implementations to drive their "Java EE" security. It's really still Tomcat security; it's just very well integrated.
JPA Integration
Yes, you can drop a JPA provider into a .war file and use it without Tomcat's help. With this approach you will not get:
@PersistenceUnit EntityManagerFactory injection/lookup
@PersistenceContext EntityManager injection/lookup
An EntityManager hooked up to a transaction-aware connection pool
JTA-Managed EntityManager support
Extended persistence contexts
A JTA-managed EntityManager basically means that two objects in the same transaction that wish to use an EntityManager will both see the same EntityManager, and there is no need to explicitly pass the EntityManager around. All this "passing" is done for you by the container.
How is this achieved? Simple, the EntityManager you got from the container is a fake. It's a wrapper. When you use it, it looks in the current transaction for the real EntityManager and delegates the call to that EntityManager. This is the reason for the mysterious EntityManager.getDelegate() method, so users can get the real EntityManager if they want and make use of any non-standard APIs. Do so with great care of course and never keep a reference to the delegate EntityManager or you will have a serious memory leak. The delegate EntityManager will normally be flushed, closed, cleaned up and discarded when a transaction completes. If you're still holding onto a reference, you will prevent garbage collection of that EntityManager and possibly all the data it holds.
It's always safe to hold a reference to an EntityManager you got from the container.
It's not safe to hold a reference to EntityManager.getDelegate().
Be very careful holding a reference to an EntityManager you created yourself via an EntityManagerFactory; you are 100% responsible for its management.
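The wrapper mechanism described above can be sketched in plain Java. TxContext, RealEntityManager and EntityManagerProxy are illustrative stand-ins, not the JPA API:

```java
// Sketch of the container's EntityManager proxy: the wrapper holds no
// state itself; on every call it looks up the real instance bound to
// the current transaction and delegates to it.
public class EmProxyDemo {
    static class RealEntityManager {
        boolean closed = false;
        String persisted;
        void persist(String entity) { persisted = entity; }
    }

    // Stand-in for the transaction registry the container maintains.
    static class TxContext {
        private static RealEntityManager current;
        static RealEntityManager begin() { current = new RealEntityManager(); return current; }
        static RealEntityManager current() { return current; }
        static void commit() { current.closed = true; current = null; }
    }

    /** The "fake" EntityManager the container injects. */
    static class EntityManagerProxy {
        void persist(String entity) { TxContext.current().persist(entity); }
        RealEntityManager getDelegate() { return TxContext.current(); } // use with care
    }

    public static void main(String[] args) {
        RealEntityManager real = TxContext.begin();
        new EntityManagerProxy().persist("order-1");
        System.out.println(real.persisted); // order-1
    }
}
```

After commit(), the registry drops its reference to the real instance; any reference you kept from getDelegate() is exactly the memory-leak hazard the paragraph above warns about.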
CDI Integration
I don't want to oversimplify CDI, but I find it is a little too big and many people have not taken a serious look; it's on the "someday" list for many people :) So here are just a couple of highlights that I think a "web guy" would want to know about.
You know all the putting and getting you do in a typical webapp? Pulling things in and out of HttpSession all day? Using String keys and continuously casting the objects you get from the HttpSession? You've probably got utility code to do that for you.
CDI has this utility code too; it's called @SessionScoped. Any object annotated with @SessionScoped gets put and tracked in the HttpSession for you. You just request the object to be injected into your servlet via @Inject FooObject, and the CDI container will track the "real" FooObject instance in the same way I described the transactional tracking of the EntityManager. Abracadabra, now you can delete a bunch of code :)
Doing any getAttribute and setAttribute on HttpServletRequest? Well, you can delete that too, with @RequestScoped, in the same way.
And of course there is @ApplicationScoped to eliminate the getAttribute and setAttribute calls you might be doing on ServletContext.
To make things even cooler, any object tracked like this can implement a @PostConstruct method, which gets invoked when the bean is created, and a @PreDestroy method, to be notified when said "scope" is finished (the session is done, the request is over, the app is shutting down).
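What such a scope does for you can be sketched with a toy Scope class in plain Java (not the CDI API): one instance per type, created lazily and destroyed when the scope ends.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Toy version of a CDI scope: one instance per type per scope, created
// lazily (the @PostConstruct moment) and torn down when the scope ends
// (the @PreDestroy moment), replacing manual get/setAttribute code.
public class Scope {
    private final Map<Class<?>, Object> instances = new HashMap<>();
    private final Map<Class<?>, Runnable> destroyers = new HashMap<>();

    public <T> T get(Class<T> type, Supplier<T> postConstruct, Consumer<T> preDestroy) {
        T existing = type.cast(instances.get(type));
        if (existing != null) {
            return existing; // same instance for the whole scope
        }
        T created = postConstruct.get();
        instances.put(type, created);
        destroyers.put(type, () -> preDestroy.accept(created));
        return created;
    }

    /** The session expired / the request finished / the app is shutting down. */
    public void end() {
        destroyers.values().forEach(Runnable::run);
        instances.clear();
        destroyers.clear();
    }
}
```

The point of the sketch: the caller never touches keys or casts; the scope hands back the one tracked instance and guarantees the teardown callback runs when the scope closes.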
CDI can do a lot more, but that's enough to make anyone want to re-write an old webapp.
Some picky things
There are some things added in Java EE 6 that are in Tomcat's wheelhouse but were not added. They don't require big explanations, but they did account for a large chunk of the "filling in the gaps".
Support for @DataSourceDefinition
Support for global JNDI (java:global, java:app, java:module)
Enum injection via @Resource MyEnum myEnum
Class injection via @Resource Class myPluggableClass
Support for @Resource(lookup="foo")
Minor points, but it can be incredibly useful to define a DataSource in the app in a portable way, share JNDI entries between webapps, and have the simple power to say "look this thing up and inject it".
Conclusion
As mentioned, not a complete list. No mention of EJB, JMS, JAX-RS, JAX-WS, JSF, Bean Validation and other useful things. But at least some idea of the things often overlooked when people talk about what Tomcat is and is not.
Also be aware that what you might have thought of as "Java EE" might not match the actual definition. With the Web Profile, Java EE has shrunk. This was deliberate, to address the complaint that "Java EE is too heavy and I don't need all that".
If you cut EJB out of the Web Profile, here's what you have left:
Java Servlets
JavaServer Pages (JSP)
JavaServer Faces (JSF)
Java Transaction API (JTA)
Java Persistence API (JPA)
Contexts and Dependency Injection (CDI)
Bean Validation
It's a pretty darn useful stack.
Unless you want EJB proper, you don't need a full stack J2EE server (commercial or not).
You can have most J2EE features (such as JTA, JPA, JMS, JSF) without a full-stack J2EE server. The only benefit of a full-stack J2EE server is that the container will manage all these on your behalf declaratively. With the advent of EJB 3, if you need container-managed services, using one is a good thing.
You can also use a no-cost full-stack server such as Glassfish, Geronimo or JBoss.
You can also run embedded J2EE container-managed services, with embedded Glassfish for example, right inside Tomcat.
You may want an EJB container if you want session beans, message-driven beans and timer beans nicely managed for you, even with clustering and failover.
I would suggest to management that they consider upgrades based on feature need. Some of these EJB containers might just as well use embedded Tomcat as their web server, so what gives!
Some managers just like to pay for things. Ask them to consider a city shelter donation or just go for BEA.
If you are being asked to move to a commercial J2EE server, the reasons may have nothing to do with the J2EE stack but with non-technical considerations.
One thing that you do get with a commercial J2EE offering that you don't get with Tomcat is technical support.
This may not be a consideration for you, depending on the service levels your web applications are supposed to meet. Can your applications be down while you try and figure out a problem with Tomcat, or will that be a major problem?
Cost isn't necessarily a downside as there a few free J2EE servers, e.g. JBoss and Glassfish.
Your question assumes that J2EE = Servlet + EJB + EAR, and therefore that there's no point in using anything more than a servlet container if you're not using EJB or EAR. This is simply not the case; J2EE includes a lot more than this. Examples include:
JTA - Java Transaction API
JPA - Java Persistence API
JMS - Java Message Service
JSF - a technology for constructing user interfaces out of components
Cheers,
Donal
In truth, with the vast array of packages and libraries available, there's little an EJB container provides that can't be added to a modern servlet container like Tomcat. So, if you ever want any of those features, you can get them "à la carte", so to speak, with the cost being the process of integrating that feature into your app.
If you're not "missing" any of these features now, then from a practical standpoint, you probably don't need them.
That all said, the modern EJB containers are really nice and come with all of those services pre-integrated, making them somewhat easier to use should you ever want them. Sometimes having the feature nearby and handy is enough to make someone explore it for its potential in their application, versus seeing the integration process of a feature as a hurdle to adoption.
With the quality of the free EJB containers, it's really hard to imagine how buying one can be at all useful, especially given that you have no real demand for one at the moment.
However, I do encourage you to actually get one and play around with it and explore the platform. Glassfish is very easy to get started with and very good, and should easily take your WARs as is (or with very minor tweaks).
As a rule, when it comes to choosing between running Tomcat and an EJB container, the question is really: why NOT use one? Speaking specifically of Glassfish, I find it easier to use than Tomcat, and its primary difference is that it can have a moderately larger memory footprint than Tomcat (particularly for a small application); on a large application you won't even notice that. For me, the memory hit isn't a big deal; for others it may be an issue.
And it gives me a single source of all this nice functionality without having to crawl the net for a 3rd party option.
