Atomikos or JBossTS without JDBC - Java

Is it possible to use one of the aforementioned transaction managers if the data source is not callable via JDBC?
I want to create an add-in for an existing application. My add-in shall be responsible for logging the read and write accesses of long-running workflow transactions. It should additionally be responsible for caching variables in case they are required, so that there need not be a read/write operation every time a variable is accessed.
The application is running in a Tomcat 6 environment and I get the data by calling a plugin manager (which gets data from the different data sources).
Do you know of any links I could read, or perhaps of an existing solution?

Sounds like you still have not fully grasped the distinction between a transaction manager and a resource manager. Transaction managers like JBossTS drive resource managers like Oracle, MSSQL etc., via XAResources supplied by the RMs' drivers.
You're not implementing a transaction manager - it's already implemented. You're implementing a new resource manager and using the existing transaction manager to drive it. Read the XA specification, then implement XAResource and enlist your resource with the transaction manager. As long as your impl is spec compliant the transaction manager will use it just the same way it does with the implementations supplied by database drivers or message queues.
Note that doing I/O to external (i.e. non-transactional) systems in ACID transaction scope is basically impossible. The best you can hope for is some form of compensation based model or 1PC behaviour with last resource commit optimization.
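To make the shape of that concrete, here is a bare skeleton of a custom resource manager. The class name and comments are illustrative, and the method bodies are stubs; a real implementation has to satisfy the XA spec's state machine against your own storage.

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class MyXAResource implements XAResource {

    public void start(Xid xid, int flags) throws XAException {
        // associate incoming work with the transaction branch xid
    }

    public void end(Xid xid, int flags) throws XAException {
        // disassociate work from xid (flags: TMSUCCESS, TMFAIL, TMSUSPEND)
    }

    public int prepare(Xid xid) throws XAException {
        // durably record everything needed to guarantee a later commit, then vote
        return XA_OK; // or XA_RDONLY if no changes were made
    }

    public void commit(Xid xid, boolean onePhase) throws XAException {
        // make the changes for xid durable and visible
    }

    public void rollback(Xid xid) throws XAException {
        // discard all changes associated with xid
    }

    public Xid[] recover(int flag) throws XAException {
        // report branches that prepared but never completed (crash recovery)
        return new Xid[0];
    }

    public void forget(Xid xid) throws XAException {
        // forget a heuristically completed branch
    }

    public boolean isSameRM(XAResource other) throws XAException {
        return other == this;
    }

    public int getTransactionTimeout() throws XAException {
        return 0;
    }

    public boolean setTransactionTimeout(int seconds) throws XAException {
        return false;
    }
}

Enlistment then happens through the container: obtain the current javax.transaction.Transaction from the transaction manager and call enlistResource(new MyXAResource()); the transaction manager drives the callbacks above in the order the spec defines.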


Safely Wrapping a Connection Pool

I am trying to implement row level security so our application can enforce more stringent access control.
One of the technologies we are looking into is Oracle's Virtual Private Database, which allows row level security by basically augmenting all queries against specific tables with a where clause predicate. Since we are in a web environment, we need to set up a special context within Oracle, inside a single request's thread. We use connection pooling with a service account.
I started to look into EclipseLink and Hibernate. EclipseLink seems to have events that fit perfectly into this model.
This would involve us migrating from Hibernate, which is not a problem, but we would then be bound to EclipseLink for these events.
Oracle seems to imply that they implement this at the data source level in their WebLogic product.
The context is set and cleared by the WebLogic data source code.
Question: Is it more appropriate to do this at the DataSource level with some series of events? What are the events or methods that I should pay the most attention to?
Added question: How would I extend a connection pool to safely initialize an Oracle context with some custom data? I am digging around in the Apache code, and it seems like extending BasicDataSource doesn't give me access to anything that would allow me to clean up the connection when Spring is done with it.
I need to set up a connection, and clean it up, as connections exit/enter the connection pool. I am hoping for an implementation that is so simple that no one can mess it up by breaking some delicate balance of products.
- Specifically we are currently using Apache Commons DBCP Basic Data Source
This would allow us to use various ways to connect to the database and still have our security enforced. But I don't see a great example or set of events to work with, and rolling my own security life cycle is never a good idea.
I eventually solved my problem by extending some of the Apache components.
First I extended org.apache.commons.pool.impl.GenericObjectPool and overrode both borrowObject() and returnObject(). I knew the type of the objects in the pool (java.sql.Connection), so I could safely cast and work with them.
Since in my case I was using Oracle VPD, I was able to set information in the application context. I recommend you read about that in more depth; it is a little complicated, and there are a lot of different options for hiding or sharing data at the various context levels, and across RAC nodes.
In essence, what I did was generate a nonce, use it to instantiate a session within Oracle, and then set the user's access level in a variable of that session, which the Oracle VPD policy would then read and use to do the row-level filtering.
I instantiated and destroyed that information in my overridden borrowObject() and returnObject(). The SQL I ran was something like this:
CallableStatement callStat = conn.prepareCall(
        "{call namespace.cust_ctx_pkg.set_session_id(" + Math.random() + ")}");
callStat.execute();
Note that Math.random() isn't a good nonce.
Next was to simply extend org.apache.commons.dbcp.BasicDataSource and set my object pool by overriding createConnectionPool(). Note that the way I did this disabled some functionality I did not need, so you may need to rewrite more or less than I did.
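As a rough sketch of that pool subclass (commons-pool 1.x API; the clear_session_id procedure and the generateNonce() helper are hypothetical additions, only set_session_id appears in the snippet above):

import java.sql.CallableStatement;
import java.sql.Connection;
import org.apache.commons.pool.PoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;

public class VpdContextPool extends GenericObjectPool {

    public VpdContextPool(PoolableObjectFactory factory) {
        super(factory);
    }

    @Override
    public Object borrowObject() throws Exception {
        // set up the Oracle session context as the connection leaves the pool
        Connection conn = (Connection) super.borrowObject();
        CallableStatement cs = conn.prepareCall(
                "{call namespace.cust_ctx_pkg.set_session_id(?)}");
        cs.setString(1, generateNonce());
        cs.execute();
        cs.close();
        return conn;
    }

    @Override
    public void returnObject(Object obj) throws Exception {
        // clear the context as the connection re-enters the pool
        Connection conn = (Connection) obj;
        CallableStatement cs = conn.prepareCall(
                "{call namespace.cust_ctx_pkg.clear_session_id()}"); // hypothetical procedure
        cs.execute();
        cs.close();
        super.returnObject(obj);
    }

    private String generateNonce() {
        // unlike Math.random(), UUID.randomUUID() is backed by SecureRandom
        return java.util.UUID.randomUUID().toString();
    }
}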
You can try any object level security mechanism for simplicity, like Spring Security ACL.
You will want to do this at the application layer. You will want a pre-commit hook and a post read hook.
The pre-commit hook is used to ensure that data from the client is being presented by a user authorized to modify that data. This prevents an unauthorized user from overwriting data that they shouldn't be able to access.
It's not intuitive, but the post read hook is used to keep the client from accessing data the user shouldn't be allowed to view. This happens post-view because this is being enforced at the application layer, not at the data layer. The application has no way to know if the caller is allowed to access the data until it's been retrieved from the data layer. In the post read hook you evaluate the credential on each row returned against the credential of the logged in user in order to determine whether or not access is allowed. If access is denied on any row then an exception would be raised and the data would not be returned to the client.
Application level security done in this way requires that you have a way to connect each row in a table to a permission/role required to access it and a way to evaluate a user's permissions on the server at runtime.
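A minimal sketch of such a post-read hook, assuming a placeholder row/credential model (SecuredRow, User and the role accessors are invented for illustration, not taken from any library):

import java.util.List;

interface SecuredRow {
    Object getId();
    String getRequiredRole(); // the permission/role stored with the row
}

interface User {
    boolean hasPermission(String role);
}

public class RowSecurityInterceptor {

    // called after the data layer returns; rejects the whole result
    // if any row falls outside the caller's permissions
    public <T extends SecuredRow> List<T> afterRead(User caller, List<T> rows) {
        for (SecuredRow row : rows) {
            if (!caller.hasPermission(row.getRequiredRole())) {
                throw new SecurityException("Access denied to row " + row.getId());
            }
        }
        return rows;
    }
}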
Hope that helps.
You will get better control by using one of the other Commons DBCP Datasources.
The Basic one is just that: basic :)
The ones in the org.apache.commons.dbcp.datasources package give you more fine-grained control.
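For example, a minimal wiring of SharedPoolDataSource from that package (DBCP 1.x API; the driver class, URL and credentials are placeholders):

import javax.sql.DataSource;
import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;
import org.apache.commons.dbcp.datasources.SharedPoolDataSource;

public class PoolSetup {

    public static DataSource create() throws Exception {
        // adapter that turns a plain JDBC driver into a ConnectionPoolDataSource
        DriverAdapterCPDS cpds = new DriverAdapterCPDS();
        cpds.setDriver("oracle.jdbc.OracleDriver");
        cpds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE");
        cpds.setUser("service_account");
        cpds.setPassword("secret");

        SharedPoolDataSource ds = new SharedPoolDataSource();
        ds.setConnectionPoolDataSource(cpds);
        ds.setMaxActive(20);
        ds.setMaxWait(5000);
        return ds;
    }
}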

Transactions in enterprise-level applications

I am trying to understand transactions, and more specifically I am doing it using the Spring framework. Going through the material I have (both internet and books), I see these terminologies:
Container-managed transactions (CMT).
Bean-managed transactions (BMT).
Java Transaction API (JTA).
Also, for a large enterprise-level application, I have encountered terminologies like "local" and "global" transactions.
What I have understood is that global transactions are for cases in which we are managing two or more different resources (like one Oracle DB and one MySQL DB), and commits/rollbacks to them happen only if both succeed or both fail. Local transactions are for when we have only one resource to manage (like a single DB connection to MySQL).
I have the following doubts:
While writing a simple standalone JDBC program, we can code the commit/rollback ourselves. I believe this is an example of a local transaction; is this correct? And is this transaction taken care of by the JDBC library itself or by the drivers?
What is CMT? What I understand is that the container takes care of the transaction management (like JBoss AS). If this is correct, how does it manage transactions internally? Are CMTs local or global? Do they use the JDBC APIs internally to manage the transactions?
What is BMT? This I am not able to understand. We can have an application deployed in an app server (and I believe in those cases the transactions would be managed by the container), or I can write a standalone application in which I manage the transactions myself in code (try/catch blocks etc.). So which category does BMT fall into?
While reading about the Spring framework, I read that Spring has its own way of managing transactions. Further down the line, I read that the Spring framework uses JTA to manage transactions. I am confused about this. If JTA is an API, do we have multiple vendors who implement JTA (like for JPA, where multiple vendors provide the implementation, e.g. Hibernate)?
Is JTA for local or global transactions? When I write a simple JDBC program to manage transactions, does it use APIs which comply with JTA? Or is JTA totally different from the transaction management a simple JDBC program uses?
Same for CMT: does CMT follow the rules which JTA has in place (the APIs, primarily)?
It would be great if you could answer point by point; I did search the net, and my doubts are still not answered.
Concerning local/global transactions: by global I suppose you are talking about XA transactions (two-phase commit: http://en.wikipedia.org/wiki/X/Open_XA). This is only mandatory when you deal with multiple databases or other transactional resources.
- Yes, it's a "local transaction": it means only one database is part of the transaction. The transaction is managed by the database and controlled by JDBC in that case.
- CMT: container-managed: the container will detect the start and the end of the transaction and will perform the commit/rollback depending on how the method returns (successful return: commit; exception: rollback). CMT relies on JTA to manage transactions on its resources. It is then up to the proper resource adapters (RAs) to talk to JDBC or any other drivers related to your EIS (data). Look at http://docs.oracle.com/javaee/5/tutorial/doc/bncjx.html
- BMT: it means it's up to the bean to control transaction boundaries. It's pretty rare to use this kind of transaction management these days. With BMT you control the UserTransaction object (see the sketch after this list); it's an error to control transaction boundaries directly with JDBC. Bear in mind also that even if you are in BMT, some frameworks like JPA (not JTA) will invalidate the current transaction on any error, even if you did not explicitly request a rollback. It means it's quite hard/dangerous/useless to use BMT.
- JTA (I hope you did not misspell JPA) is at another level: JTA is the API a resource adapter must implement to be part of a container transaction. Apart from the UserTransaction class (what you'll use in BMT to control transaction boundaries), you have nothing to do with JTA. There are not multiple implementations of JTA, but there are multiple implementations of JTS (Java Transaction Service); each application server vendor implements its own JTS.
- JTA is an API for framework designers: JTA imposes a contract on the resource adapter (RA), and the RA will use JDBC or any other API to deal with its EIS (Enterprise Information System, let's call it your DB). JTA works for XA and non-XA transactions (if your RA supports XA transactions).
- CMT uses JTA, but again, JTA is a low-level contract between components of the application server. Application designers should not care about JTA.
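For the BMT point above, a minimal sketch of a bean that demarcates its own transaction via UserTransaction (the bean name and the elided work are illustrative):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN) // opt out of CMT
public class BmtBean {

    @Resource
    private UserTransaction utx;

    public void doWork() throws Exception {
        utx.begin();
        try {
            // ... work against container-managed resources (JDBC, JPA, JMS) ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}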

Transaction Management in EJB

Recently I was asked a question which left me thinking... I want to get the community's views on it.
I have a CustomerEJB which has, say, a createCustomer method. My EJB is exposed as a web service, and hence createCustomer is one of its operations.
When a request hits createCustomer, two operations need to be performed:
An INSERT SQL query into the database, adding the data that came in the input request
Creation of a text file, say a .txt, in the file system.
Now the question is that I want to couple these two tasks into one transaction. If either task fails, I roll back the other task as well.
Without mentioning any hot technologies like Spring/Hibernate, what approach can I follow for transaction management?
My thoughts:
1. I can use JTA, demarcate the transaction boundaries and perform commit and rollback accordingly. JDBC can be used for the SQL task
2. I can use DAOs
Inviting your kind suggestions/comments
You would need to wrap the file creation in an XA-capable JCA connector (not sure whether there's a ready-made one out there; a quick Google only found this fsconnector, which doesn't support transactions yet), use an XA driver for your DB transaction (most DBs can handle this) and then wrap your EJB method in an XA transaction (should be straightforward).
As long as both resources can handle the XA transactions, you'll get the benefit of 2-phase commits, which is what you're after.
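If no transactional file connector is available, one pragmatic fallback - compensation rather than true XA - is to undo the file write when the transaction fails to commit, using a JTA Synchronization. A sketch using the standard TransactionSynchronizationRegistry (the class name and the elided file-writing details are illustrative):

import java.io.File;
import javax.annotation.Resource;
import javax.transaction.Status;
import javax.transaction.Synchronization;
import javax.transaction.TransactionSynchronizationRegistry;

public class CompensatingFileWriter {

    @Resource
    private TransactionSynchronizationRegistry registry;

    public void writeFileInTransaction(final File file) {
        // ... create and write the .txt file here ...
        registry.registerInterposedSynchronization(new Synchronization() {
            public void beforeCompletion() {
            }
            public void afterCompletion(int status) {
                if (status != Status.STATUS_COMMITTED) {
                    file.delete(); // compensate: remove the file if the INSERT rolled back
                }
            }
        });
    }
}

Note the window where this is weaker than XA: a crash before afterCompletion() runs can leave an orphaned file, which true two-phase-commit recovery would handle.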

How to wrap an object change in my own transaction and incorporate it with Hibernate to JTA?

I have a web-app, which I deploy on Tomcat 6 and it uses Hibernate.
It receives messages on a JMS queue which trigger changes both to my DB, via Hibernate, and to an object of mine (Agent).
The web-requests also access the DB, via Hibernate, and access the shared object (there's a ConcurrentHashMap<AgentId,Agent> held by a singleton).
My problem is that I have a JMS message which changes several different Agents and several tables and I need the changes in the Agents to be available if and only if the DB transaction completed successfully. In addition I do not want to employ read locks as that is too much of a performance hazard for me.
I was thinking of somehow implementing the XAResource interface for my singleton and then use JTA to manage both my singleton and my Hibernate transaction.
What do you think? Does it sound reasonable? Am I way off?
If any additional details are needed please don't hesitate to ask
Ittai
Instead of implementing XAResource, you could use a transactional cache like Ehcache, which supports JTA since version 2.0 (i.e. it can act as an XA resource and participate in an XA transaction alongside other XA resources).
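A sketch of how that usage could look, assuming an Ehcache 2.x cache configured with transactionalMode="xa" and a JTA UserTransaction available in the Tomcat setup (the class and the wiring are illustrative):

import javax.transaction.UserTransaction;
import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

public class AgentUpdater {

    private final Cache agentCache;    // XA-enabled Ehcache cache
    private final UserTransaction utx; // from the JTA setup

    public AgentUpdater(Cache agentCache, UserTransaction utx) {
        this.agentCache = agentCache;
        this.utx = utx;
    }

    public void applyMessage(String agentId, Object updatedAgent) throws Exception {
        utx.begin();
        try {
            // ... Hibernate updates to the DB in the same JTA transaction ...
            agentCache.put(new Element(agentId, updatedAgent)); // staged, not yet visible
            utx.commit(); // DB change and cache change become visible together
        } catch (Exception e) {
            utx.rollback(); // neither the DB change nor the cache change survives
            throw e;
        }
    }
}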

What is the 'best' way to do distributed transactions across multiple databases using Spring and Hibernate

I have an application - more like a utility - that sits in a corner and updates two different databases periodically.
It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.
Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.
The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.
What is the 'best' way to put transactionality around the database updates?
I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.
The best way to distribute transactions over more than one database is: Don't.
Some people will point you to XA but XA (or Two Phase Commit) is a lie (or marketese).
Imagine: after the first phase has told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data and only the "success" message was lost?
The best way is to copy the data into a single place. Use a scheme which allows you to abort the copy and continue it at any time (for example, ignore data which you already have, or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem since you're only reading data from the source, so when the transaction fails for any reason, you can ignore the source database. Therefore, this is a plain old single-source transaction.
After you have copied the data, process it locally.
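A sketch of that restartable copy in plain JDBC (table and column names are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class IncrementalCopy {

    public void copy(Connection source, Connection target) throws Exception {
        target.setAutoCommit(false);
        try {
            // find how far the previous run got
            long maxId = 0;
            Statement st = target.createStatement();
            ResultSet rs = st.executeQuery("SELECT COALESCE(MAX(id), 0) FROM copy_table");
            if (rs.next()) {
                maxId = rs.getLong(1);
            }

            // read only records we don't have yet, in ID order
            PreparedStatement read = source.prepareStatement(
                    "SELECT id, payload FROM source_table WHERE id > ? ORDER BY id");
            read.setLong(1, maxId);
            PreparedStatement write = target.prepareStatement(
                    "INSERT INTO copy_table (id, payload) VALUES (?, ?)");

            ResultSet data = read.executeQuery();
            while (data.next()) {
                write.setLong(1, data.getLong("id"));
                write.setString(2, data.getString("payload"));
                write.executeUpdate();
            }
            target.commit(); // one local transaction; safe to abort and re-run
        } catch (Exception e) {
            target.rollback();
            throw e;
        }
    }
}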
Set up a transaction manager in your context. The Spring docs have examples, and it is very simple. Then when you want to execute a transaction:
try {
    TransactionTemplate tt = new TransactionTemplate(txManager);
    tt.execute(new TransactionCallbackWithoutResult() {
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            updateDb1();
            updateDb2();
        }
    });
} catch (TransactionException ex) {
    // handle
}
For more examples and information, perhaps look at this:
XA transactions using Spring
When you say "two different databases", do you mean different database servers, or two different schemas within the same DB server?
If the former, then if you want full transactionality you need the XA transaction API, which provides full two-phase commit. But more importantly, you also need a transaction coordinator/monitor which manages transaction propagation between the different database systems. This is part of the Java EE spec, and a pretty rarefied part of it at that. The TX coordinator itself is a complex piece of software. Your application software (via Spring, if you so wish) talks to the coordinator.
If, however, you just mean two databases within the same DB server, then vanilla JDBC transactions should work just fine, just perform your operations against both databases within a single transaction.
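In the same-server case it is just one connection and one ordinary transaction (schema and table names are placeholders):

import java.sql.Connection;
import java.sql.Statement;

public class SameServerUpdate {

    public void update(Connection conn) throws Exception {
        conn.setAutoCommit(false);
        try {
            Statement st = conn.createStatement();
            // both statements ride on the same connection, so one commit covers both schemas
            st.executeUpdate("UPDATE schema_one.accounts SET balance = balance - 10 WHERE id = 1");
            st.executeUpdate("INSERT INTO schema_two.audit_log (message) VALUES ('debited 10')");
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}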
In this case you would need a transaction monitor (a server supporting the XA protocol) and to make sure your databases support XA as well. Most (all?) J2EE servers come with a transaction monitor built in. If your code is not running in a J2EE server, then there is a bunch of standalone alternatives - Atomikos, Bitronix, etc.
You could try Spring's ChainedTransactionManager - http://docs.spring.io/spring-data/commons/docs/1.6.2.RELEASE/api/org/springframework/data/transaction/ChainedTransactionManager.html - which supports distributed DB transactions. This could be a better alternative to XA.
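A minimal sketch of wiring it up for the two session factories described in the question (the hibernate3 package is an assumption about the setup; ChainedTransactionManager starts transactions in the order given and commits in reverse order, so it is best-effort, not two-phase commit):

import org.hibernate.SessionFactory;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.orm.hibernate3.HibernateTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

public class TxConfig {

    public static PlatformTransactionManager chained(SessionFactory sf1, SessionFactory sf2) {
        // a commit failure in the last-listed manager (committed first) still lets
        // the earlier-listed one roll back; failures after that cannot be undone
        return new ChainedTransactionManager(
                new HibernateTransactionManager(sf1),
                new HibernateTransactionManager(sf2));
    }
}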
