Does someone know if GlassFish 5 supports global transactions with 2PC (the XA protocol), without installing extra tools?
I have looked for information on the GlassFish page ("The Open Source Java EE Reference Implementation"), where I downloaded the app server, and on other pages, but I have had no luck.
I am trying to run a transaction across two microservices that each insert a value into the database. I have configured GlassFish's JNDI with "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" and it looks like it works, but when I check the database only the value from one service has been added (the global transaction with 2PC does not work). I am beginning to think that GlassFish does not support 2PC.
I have read that this can be done with Tomcat, but that requires adding tools like Atomikos, Bitronix, etc. The idea is to do it with GlassFish without installing anything else.
Regards.
Does someone know if GlassFish 5 supports global transactions with 2PC (the XA protocol), without installing extra tools?
GlassFish 5 supports transactions over XA datasources. You can write a program that executes transactions combining operations on multiple databases; for instance, a single transaction that performs operations against both an Oracle and an IBM DB2 database. If one of the operations in the transaction fails, the other operations (in the same database and in the others) will not be executed, or will be rolled back.
I am trying to run a transaction across two microservices that each insert a value into the database. I have configured GlassFish's JNDI with "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" and it looks like it works, but when I check the database only the value from one service has been added.
If your program invokes a REST/web service within a transaction, the operations performed by that remote service do not join the transaction. An error in the calling program will not roll back the operations already performed by the invoked service.
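The guarantee an XA transaction manager gives comes from the two-phase commit protocol it runs against every enlisted resource: first it collects a "prepared" vote from each one, and only then tells all of them to commit. The following is a minimal in-memory sketch of that protocol for illustration; the class and method names are invented here, not GlassFish or JTA APIs.

```java
import java.util.*;

class InMemoryResource {
    final String name;
    final Map<String, String> committed = new HashMap<>();
    Map<String, String> pending = new HashMap<>();
    boolean failOnPrepare;

    InMemoryResource(String name) { this.name = name; }

    void write(String key, String value) { pending.put(key, value); }

    // Phase 1: vote. A real XA resource durably logs its vote here.
    boolean prepare() { return !failOnPrepare; }

    // Phase 2: make the pending writes permanent.
    void commit() { committed.putAll(pending); pending = new HashMap<>(); }

    void rollback() { pending = new HashMap<>(); }
}

class TwoPhaseCoordinator {
    // Commits only if every resource votes "yes" in phase 1;
    // a single "no" vote rolls back all of them.
    static boolean commit(List<InMemoryResource> resources) {
        for (InMemoryResource r : resources) {
            if (!r.prepare()) {
                resources.forEach(InMemoryResource::rollback);
                return false;
            }
        }
        resources.forEach(InMemoryResource::commit);
        return true;
    }
}
```

Two separate microservices each committing locally never pass through a shared phase 1, which is why only one insert survives in the scenario from the question.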
Related
App1 interacts with App2 (an EJB application) through a client API exposed by App2. App2 uses CMT-managed JTA transactions in JBoss. We obtain the UserTransaction from App2 (JBoss) via a JNDI lookup.
App1 makes a call to App2 to insert data into DS2 using the UserTransaction's begin() and commit().
App1 makes a call to DS1 via Hibernate JPA, using a JPA transaction manager, to insert data into DS1.
Is it possible to wrap both of the above DB operations in a single (distributed) transaction?
To do this it is necessary to implement your own transactional resource, capable of joining an ongoing JTA transaction. See this answer as well for some guidelines; one way to see how it is done is to look at the XA driver code for a database or JMS resource and base yourself on that.
This is not trivial to do and a very rare use case, usually solved in practice by adopting an alternative design. One way would be to extract the necessary code from App2 into a JAR library and use it in Tomcat with a JTA transaction manager such as Atomikos, connected to two XA JTA datasources.
Another way is to flush the SQL statements to the database from Tomcat and see if they work before sending a synchronous call to JBoss, which returns whether its transaction went through, and then commit or roll back in Tomcat depending on that result.
This does not guarantee success 100% of the time (network failures etc.), but it might be acceptable depending on what the system does and the business consequences of a failed transaction.
Yet another way is to make the operation revertible on the JBoss side and expose a compensation service that Tomcat calls when errors are detected. For that, and by making both servers JBoss, you could take advantage of the JBoss Narayana engine; see also this answer.
Which way is better depends on the use case, but implementing your own XA transactional resources is a big undertaking; it would be simpler to change the design. The reason very few projects do it is that it is complex and there are simpler alternatives.
Tomcat is a web server, so it does not support global transactions.
JBoss is an application server, so it supports global transactions.
If you have to combine both, you have to use JOTM or Atomikos, which act as transaction managers and handle the commits and rollbacks.
Is there a way to use JDBC to target multiple databases when I execute statements (basic inserts, updates, deletes)?
For example, assume both servers [200.200.200.1] and [200.200.200.2] have a database named MyDatabase, and the databases are exactly the same. I'd like to run "INSERT INTO TestTable VALUES(1, 2)" on both databases at the same time.
Note regarding JTA/XA:
We're developing a JTA/XA architecture to target multiple databases in the same transaction, but it won't be ready for some time. For now, I'd like to use standard JDBC batch commands and have them hit multiple servers, if possible. I realize it won't be transaction-safe; I just want the commands to hit both servers for basic testing at the moment.
You need one connection per database. Once you have those, the standard auto-commit/rollback calls will work.
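For the basic-testing case, a small helper that loops over the open connections and runs the same statement on each is enough. This is a sketch: the helper name is made up, and there is deliberately no atomicity across the servers, since that is what the JTA/XA work will add later.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

class MultiTargetJdbc {
    // Runs the same update on every connection, one after the other.
    // Not transaction-safe: if the second server fails, the first has
    // already applied the change.
    static int[] executeOnAll(List<Connection> connections, String sql)
            throws SQLException {
        int[] counts = new int[connections.size()];
        for (int i = 0; i < connections.size(); i++) {
            try (Statement st = connections.get(i).createStatement()) {
                counts[i] = st.executeUpdate(sql);
            }
        }
        return counts;
    }
}
```

With one DriverManager.getConnection(...) per server (e.g. against MyDatabase on 200.200.200.1 and 200.200.200.2), calling executeOnAll(conns, "INSERT INTO TestTable VALUES(1, 2)") hits both databases.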
You could try Spring; it already has transaction managers set up.
Even if you don't use Spring, all you have to do is put the XA versions of the JDBC driver JARs on your CLASSPATH. Two-phase commit will not work without them.
I'd wonder whether replication handled by the database would not be a better idea. Why should the middle tier care about database clustering?
The best quick-and-dirty approach for development is to use multiple database connections. The statements won't be in the same transaction, since they run on different connections, but I don't think that is much of an issue if this is just for testing.
When your JTA/XA architecture is ready, just plug it into the already working code.
I need to develop some services and expose an API to some third parties.
In those services I may need to fetch/insert/update/delete data, with some complex calculations involved (not just simple CRUD). I am planning to use Spring and MyBatis.
But the real challenge is that there will be multiple DB nodes holding the same data (an external setup takes care of keeping them in sync). When I get a request for some data, I need to pick one DB node at random, query it, and return the results. If the selected DB is unreachable, has network issues, or fails for some unknown reason, I need to try to connect to another DB node.
I am aware of Spring's AbstractRoutingDataSource. But where should the DB connection retry logic be injected? Will Spring handle transactions properly if I switch the DataSource dynamically?
Or should I avoid the out-of-the-box Spring and MyBatis integration and do the transaction management myself with MyBatis?
What do you guys suggest?
I propose using a NoSQL database like MongoDB, which is easy to cluster. You can, for example, configure 10 servers and replicate the data 3 times.
That means that if 2 of your 10 servers fail, your data is still safe.
NoSQL databases are different from RDBMSs, but they can give high performance in a cluster.
Also, there is no transaction support in NoSQL; in the case of financial operations you have to handle it manually.
You actually have to think in a different way when developing with NoSQL.
Yes, it will work. Take AbstractRoutingDataSource and code your own subclass. The only thing you cannot do is change the target database while a transaction is running.
So what you have to do is put the DB retry code in getConnection(). If the connection becomes invalid during the transaction, you should let it fail.
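The node-selection-with-retry logic can be kept independent of Spring and delegated to from a getConnection() override. A rough sketch, assuming a simple per-node reachability check (the class name and check are illustrative, not Spring APIs):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

// Picks a database node at random and falls back to the remaining
// nodes if the chosen one is unreachable, as described in the answer.
class FailoverRouter<T> {
    private final List<T> nodes;
    private final Random random = new Random();

    FailoverRouter(List<T> nodes) { this.nodes = new ArrayList<>(nodes); }

    // Tries every node once, in shuffled order, before giving up.
    T pick(Predicate<T> reachable) {
        List<T> order = new ArrayList<>(nodes);
        Collections.shuffle(order, random);
        for (T node : order) {
            if (reachable.test(node)) {
                return node;
            }
        }
        throw new IllegalStateException("no reachable database node");
    }
}
```

In a custom routing DataSource, `reachable` could attempt to borrow a connection from the node's pool; if every node fails, letting the exception propagate aborts the transaction cleanly.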
I have a requirement to produce a prototype (running in a Java EE compatible application server with MySQL) demonstrating the following:
Demonstrate the ability to distribute a transaction over multiple databases located at different sites globally (application-managed data replication).
Demonstrate the ability to write a transaction to one of a number of database clusters located at multiple sites, where the database to write to is selected based on the user's location (database-managed data replication).
I have the option of choosing either a Spring stack or a Java EE stack (EJB etc.). It would be useful to know your opinions on which stack better supports distributed transactions across multiple database clusters.
If possible, could you also point me to any resources you think would be useful for learning how to implement the above with either stack? Seeing examples of both would help me understand how they work and put me in a better position to decide which stack to use.
I have seen a lot of sites by searching on Google, but most seem outdated (i.e. pre-EJB 3 and pre-Spring 3).
Thanks
I would use the Java EE stack in the following way:
configure an XA DataSource for each database server
according to the user's location, a stateless EJB looks up the corresponding DataSource and gets a connection from it
when broadcasting a transaction to all servers, a stateless EJB iterates over all configured DataSources and executes one or more queries on each of them, within a single transaction
In case of a technical failure, the transaction is rolled back on all servers concerned. In case of a business failure, the code can trigger a rollback thanks to context.setRollbackOnly().
That way, you benefit from Java EE's automatic distributed transaction demarcation first, and you can then use more complex patterns if you need to manage transactions manually.
BUT the more servers you enlist in your transaction, the longer the two-phase commit will take, especially if there is high latency between the systems. And I doubt MySQL is the best relational database implementation for such complex distributed transactions.
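The location-based lookup in the steps above can be as simple as a map from region to JNDI name, with the stateless EJB passing the chosen name to InitialContext.lookup(...). The region keys and JNDI paths below are assumptions for illustration, not a standard layout:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Maps a user's region to the JNDI name of the matching XA DataSource.
// Region names and JNDI paths are hypothetical examples.
class RegionalDataSources {
    private static final Map<String, String> JNDI_BY_REGION = new HashMap<>();
    static {
        JNDI_BY_REGION.put("EU", "jdbc/eu-cluster-xa");
        JNDI_BY_REGION.put("US", "jdbc/us-cluster-xa");
        JNDI_BY_REGION.put("APAC", "jdbc/apac-cluster-xa");
    }

    // Falls back to a default cluster for unknown regions.
    static String jndiNameFor(String region) {
        return JNDI_BY_REGION.getOrDefault(region, "jdbc/us-cluster-xa");
    }

    // For the broadcast case: every DataSource the EJB must enlist.
    static Collection<String> allJndiNames() {
        return JNDI_BY_REGION.values();
    }
}
```

Because all of these DataSources are configured as XA resources, the container enlists whichever ones the EJB touches into the same JTA transaction automatically.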
I need to improve traceability in a web application that usually runs under a fixed DB user. The DBA should have fast access to information about the heavy users that are degrading the database.
Five years ago, I implemented a .NET ORM engine which logged the user and the server using the DBMS_APPLICATION_INFO package, in a wrapper above the connection manager with the following code:
DBMS_APPLICATION_INFO.SET_MODULE('" + User + " - " + appServerMachine + "','');
Each time a client gets a connection from the pool, the package is executed to record the information in V$SESSION.
Has anyone discovered or implemented a solution for this problem using TopLink or Hibernate? Is there a default implementation for it?
I found here a solution like the one I implemented 5 years ago, but I'd like to know whether anyone has a better solution integrated with the ORM:
Using DBMS_APPLICATION_INFO with JBoss
My application is built on Spring, the DAOs are implemented with JPA (using Hibernate), and it currently runs directly on Tomcat, with plans to migrate to SAP NetWeaver Application Server next year.
Thanks.
In Oracle 10g you can use DBMS_SESSION.SET_IDENTIFIER to uniquely identify a specific session. Oracle also provides a JDBC built-in to hook this into a connection pool. You will have to provide your own means of uniquely identifying a session, which will depend on your application.
Your DBA will then have enough information to identify the resource hungry session.
No DBA I know would be impressed with a huge text file generated from the middle tier.
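With plain JDBC this tagging can be done each time the pool hands out a connection. Recent Oracle JDBC drivers expose OCSID.* client-info keys that map onto V$SESSION columns; this is a sketch, so verify the key names against your driver version before relying on them.

```java
import java.sql.Connection;
import java.sql.SQLClientInfoException;

class SessionTagger {
    // Call this when a connection is borrowed from the pool.
    // OCSID.CLIENTID should appear in V$SESSION.CLIENT_IDENTIFIER
    // (what DBMS_SESSION.SET_IDENTIFIER sets) and OCSID.MODULE in
    // V$SESSION.MODULE (what DBMS_APPLICATION_INFO.SET_MODULE sets).
    static void tag(Connection conn, String user, String appServerMachine)
            throws SQLClientInfoException {
        conn.setClientInfo("OCSID.CLIENTID", user);
        conn.setClientInfo("OCSID.MODULE", user + " - " + appServerMachine);
    }
}
```

This keeps the round trips out of the application's SQL path, since the driver piggybacks the client info on the next statement it sends.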
If you want to know about queries that are costing a lot to run, you should go directly into your database server. There are monitoring tools for that, specific to each server. For example in PostgreSQL you would run SELECT * FROM pg_stat_activity as an admin to check each connection and what it's doing, how long it's been running, etc.
If you really want or need to do it from the container, then maybe you can define an interceptor with Spring AOP to execute the statement you need before doing anything else. Keep in mind that a database connection is not always used by the same application user, since you're using a pool.
You should be able to configure a logger (e.g. Log4j) on your connection pool. You may need a custom appender to pull out the user ID.
Two points to consider:
On a busy system this will generate a big log file.
Frequent connections are not necessarily an indication of behaviour that would degrade the DB.