Storing connection objects in session variable for database rollback - java

I have a web application spread across multiple modules in different JSP pages. Currently I use different Oracle connection objects across these pages due to scope limitations. I now need to roll back database transactions done on any of the JSP pages from a central JSP display page on a button click. But a database rollback requires the associated connection object.
I thought of maintaining only one connection object, storing it as a session attribute and retrieving it when needed. By doing this, I could roll back database transactions done on any page from the central display page. Kindly let me know whether the above is feasible.

What I'd try is to create a singleton class that provides the needed connection. Ideally I'd use a pool, but failing that, a plain singleton would work. When you need to roll back, retrieve the connection from that class and perform the rollback (or ask the class to do it for you). If you prefer a pool, iterate over the active connections and roll each one back.
Opening connections inside JSPs is not a good idea. Try to isolate that logic from the pages as much as possible; moving it into dedicated classes is a small step toward this.
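A minimal sketch of such a singleton (class and method names are illustrative, and the JDBC URL and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Sketch of a singleton connection holder; URL/credentials are placeholders.
    public final class ConnectionHolder {

        private static ConnectionHolder instance;
        private final Connection connection;

        private ConnectionHolder() throws SQLException {
            connection = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//host:1521/service", "user", "password");
            connection.setAutoCommit(false); // so rollback() actually undoes work
        }

        public static synchronized ConnectionHolder getInstance() throws SQLException {
            if (instance == null) {
                instance = new ConnectionHolder();
            }
            return instance;
        }

        public Connection getConnection() {
            return connection;
        }

        // the central display page can call this on the button click
        public void rollback() throws SQLException {
            connection.rollback();
        }
    }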
Hope this helps.

Related

Safely Wrapping a Connection Pool

I am trying to implement row level security so our application can enforce more stringent access control.
One of the technologies we are looking into is Oracle's Virtual Private Database, which allows row level security by basically augmenting all queries against specific tables with a where clause predicate. Since we are in a web environment, we need to set up a special context within Oracle, inside a single request's thread. We use connection pooling with a service account.
I started to look into EclipseLink and Hibernate. EclipseLink seems to have events that fit perfectly into this model.
This would involve migrating from Hibernate, which is not a problem, but we would then be bound to EclipseLink for these events.
Oracle seems to imply that they implement this at the data-source level in their WebLogic product.
The context is set and cleared by the WebLogic data source code.
Question: Is it more appropriate to do this at the DataSource level with some series of events? What are the events or methods that I should pay the most attention to?
Added Question: How would I extend a connection pool to safely initialize an Oracle context with some custom data? I am digging around in the Apache code, and it seems like extending BasicDataSource doesn't give me access to anything that would allow me to clean up the connection when Spring is done with it.
I need to set up a connection and clean it up as it enters / exits the connection pool. I am hoping for an implementation so simple that no one can break it by upsetting some delicate balance of products.
- Specifically, we are currently using the Apache Commons DBCP BasicDataSource
This would allow us to use various ways of connecting to the database while still having our security enforced. But I don't see a great example or set of events to work with, and rolling my own security life cycle is never a good idea.
I eventually solved my problem by extending some of the Apache components.
First I extended org.apache.commons.pool.impl.GenericObjectPool and overrode both borrowObject() and returnObject(). I knew the type of the objects in the pool (java.sql.Connection) so I could safely cast and work with them.
Since in my case I was using Oracle VPD, I was able to set information in the application context. I recommend you read about that in more depth: it is a little complicated, and there are a lot of different options for hiding or sharing data at various context levels and across RAC nodes. Start with Oracle's documentation on application contexts.
In essence, what I did was generate a nonce, use it to instantiate a session within Oracle, and then set the user's access level in a variable of that session, which the Oracle VPD policy would then read and use to do the row-level filtering.
I set up and tore down that information in my overridden borrowObject() and returnObject(). The SQL I ran was something like this:

    CallableStatement callStat = conn.prepareCall(
            "{call namespace.cust_ctx_pkg.set_session_id(" + Math.random() + ")}");
    callStat.execute();

Note that Math.random() isn't a good nonce.
Next, I simply extended org.apache.commons.dbcp.BasicDataSource and set my object pool by overriding createConnectionPool(). Note that the way I did this disabled some functionality I did not need, so you may need to rewrite more or less than I did.
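Put together, the two extensions might look roughly like the sketch below. This is a sketch under assumptions: the clear_session_id cleanup procedure and the parameter type of set_session_id are guesses about the context package, and assigning a bare pool in createConnectionPool() is exactly the kind of thing that drops the tuning functionality mentioned above.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.util.UUID;
    import org.apache.commons.dbcp.BasicDataSource;
    import org.apache.commons.pool.impl.GenericObjectPool;

    // Pool that sets up / tears down the Oracle session context on borrow/return.
    // commons-pool 1.x works with raw Object, hence the casts.
    public class VpdObjectPool extends GenericObjectPool {

        @Override
        public Object borrowObject() throws Exception {
            Object obj = super.borrowObject();
            Connection conn = (Connection) obj;          // the pool only holds Connections
            String nonce = UUID.randomUUID().toString(); // better nonce than Math.random()
            try (CallableStatement cs = conn.prepareCall(
                    "{call namespace.cust_ctx_pkg.set_session_id(?)}")) {
                cs.setString(1, nonce);
                cs.execute();
            }
            return obj;
        }

        @Override
        public void returnObject(Object obj) throws Exception {
            Connection conn = (Connection) obj;
            // hypothetical cleanup procedure; use whatever your package provides
            try (CallableStatement cs = conn.prepareCall(
                    "{call namespace.cust_ctx_pkg.clear_session_id()}")) {
                cs.execute();
            } finally {
                super.returnObject(obj);
            }
        }
    }

    // In its own file: a DataSource that swaps in the pool above.
    public class VpdDataSource extends BasicDataSource {
        @Override
        protected void createConnectionPool() {
            connectionPool = new VpdObjectPool(); // loses BasicDataSource's pool tuning
        }
    }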
You can try any object level security mechanism for simplicity, like Spring Security ACL.
You will want to do this at the application layer. You will want a pre-commit hook and a post read hook.
The pre-commit hook is used to ensure that data from the client is being presented by a user authorized to modify that data. This prevents an unauthorized user from overwriting data that they shouldn't be able to access.
It's not intuitive, but the post-read hook is used to keep the client from accessing data the user shouldn't be allowed to view. This happens after the read because the rule is enforced at the application layer, not at the data layer: the application has no way to know whether the caller is allowed to access the data until it has been retrieved from the data layer. In the post-read hook you evaluate the credential on each returned row against the credential of the logged-in user to determine whether access is allowed. If access is denied on any row, an exception is raised and the data is not returned to the client.
Application-level security done this way requires that you have a way to connect each row in a table to the permission/role required to access it, and a way to evaluate a user's permissions on the server at runtime.
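As an illustration, a post-read hook might look like the sketch below; Row, Query, User, userCan() and AccessDeniedException are hypothetical stand-ins for your own domain types.

    import java.util.List;

    // Minimal sketch of a post-read hook.
    public List<Row> findWithPostReadCheck(User caller, Query query) {
        List<Row> rows = dataLayer.execute(query); // fetch first...
        for (Row row : rows) {
            // ...then check each row's required credential against the caller's
            if (!userCan(caller, row.getRequiredPermission())) {
                throw new AccessDeniedException("row " + row.getId() + " not visible");
            }
        }
        return rows; // only returned if every row passed the check
    }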
Hope that helps.
You will get better control by using one of the other Commons DBCP Datasources.
The Basic one is just that: basic :)
The ones in the org.apache.commons.dbcp.datasources package give you more fine-grained control.
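For instance, a minimal setup with SharedPoolDataSource from that package might look like this sketch (driver, URL, and credentials are placeholders):

    import java.sql.Connection;
    import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;
    import org.apache.commons.dbcp.datasources.SharedPoolDataSource;

    public class FineGrainedPoolExample {
        public static void main(String[] args) throws Exception {
            DriverAdapterCPDS cpds = new DriverAdapterCPDS();
            cpds.setDriver("oracle.jdbc.OracleDriver");
            cpds.setUrl("jdbc:oracle:thin:@//host:1521/service");
            cpds.setUser("app");
            cpds.setPassword("secret");

            SharedPoolDataSource ds = new SharedPoolDataSource();
            ds.setConnectionPoolDataSource(cpds);
            ds.setMaxActive(10); // one of the extra knobs you get

            try (Connection conn = ds.getConnection()) {
                // use the pooled connection
            }
        }
    }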

How to manage Repository and UnitOfWork on top of JDBC

I'm wondering what would be the best way of implementing a UnitOfWork pattern on top of JDBC when using repository pattern.
I want to use a UOW to commit or roll back commands from potentially multiple different repositories. What I came up with is that the repositories get the connection they execute commands on from the UOW object, and the UOW manages committing or rolling it back.
Now the question is: how shall the UOW manage connections? Shall it obtain one connection at the start of a cycle, share it with all repositories that ask for one, and at the end either commit or roll it back? Or shall it let the datasource manage the connections, and keep a reference to all the connections obtained in one session so that it can commit or roll back all of them?
Which approach makes the best use of connection pooling and will cause the least trouble in managing connections?
It's probably also worth noting that I use a single-threaded approach, meaning a whole cycle (a single UOW) runs on a single thread, and no fancy multithreading management is required in either the UOW or the repositories. However, multiple cycles may run in parallel, each on its own thread.
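For concreteness, the first option (the UOW owns one pooled connection for the whole cycle, and repositories borrow it) could be sketched like this; names are illustrative:

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Minimal sketch of a single-connection unit of work.
    public class UnitOfWork implements AutoCloseable {

        private final Connection connection;

        public UnitOfWork(DataSource dataSource) throws SQLException {
            this.connection = dataSource.getConnection();
            this.connection.setAutoCommit(false); // commands accumulate until commit()
        }

        // repositories call this instead of opening their own connections
        public Connection getConnection() {
            return connection;
        }

        public void commit() throws SQLException {
            connection.commit();
        }

        public void rollback() throws SQLException {
            connection.rollback();
        }

        @Override
        public void close() throws SQLException {
            connection.close(); // returns the connection to the pool
        }
    }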

Connection pool for dynamic database connections

The problem setup is based on a web service (Spring/Java, Tomcat 7 and MySQL) where every user gets their own database, hence each request needs its own connection. As all databases are created dynamically at runtime, configuring them statically before startup is not an option.
To optimise database connection usage, an implementation of a database connection pool would be great, right?
With Java/Spring: How would I create a connection pool for dynamic databases? I am a bit struck by the lack of clean options here!
Problem: Tomcat's connection pool (and, as far as I understand, C3P0 as well) treats each new DataSource instance as a whole new connection pool -> stack-reference
Is it a good idea to create a static datasource with a generic MySQL connection (without specifying the database on connection) and use a connection pool with this datasource together with adapted SQL statements?
stack-reference
What about developing a custom datasource pool backed by a persistent database? Any experience with performance here? Any advice? Any libraries that do that?
Or would it be feasible to work around Tomcat's DataSource problem by creating Tomcat JNDI DataSources dynamically, manipulating its context.xml from Java?
I can't believe that there aren't more plain/simple solutions for this. Grails/Hibernate struggles with this, Java/JDBC struggles with this... is it such a rare use case to separate user data on a per-user basis by creating user-specific databases dynamically? If so, what would be a better setup?
EDIT
Another option is the suggestion from @M.Deinum to use a single configured datasource and dynamically hot-swap it for the right connection -> M.Deinum blog and stack-reference. How does that perform with a connection pool like the ones above?
I believe that HikariCP works without having to specify a single database.
Once the databases are created at runtime, you have to create the pools at runtime as well. I'm afraid the Spring infrastructure gives you no help here, as it is tuned for the usual static use case.
I'd have a map of pools:
keep a Map<connectionUrl, c3p0Pool>
when a connection is requested, get the corresponding c3p0 pool from the map (creating it if it doesn't exist yet)
This way you get the best of both worlds: the real connection pool for each dynamically created database is handled by a c3p0 instance, but you can create new instances at runtime.
This works as a low-level solution. If you want to go further, you can wrap this logic in a DB connection provider and register it as a "driver". That way, whenever any part of your application requests a new connection, you can return one from the existing pools (and if a connection to a brand-new database is requested, create a new pool for it).
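A sketch of that map-of-pools idea with c3p0; the MySQL driver class and the credentials are assumptions:

    import com.mchange.v2.c3p0.ComboPooledDataSource;
    import java.beans.PropertyVetoException;
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // One c3p0 pool per JDBC URL, created lazily on first request.
    public class DynamicPoolProvider {

        private final ConcurrentMap<String, ComboPooledDataSource> pools =
                new ConcurrentHashMap<>();

        public Connection getConnection(String jdbcUrl) throws SQLException {
            ComboPooledDataSource pool = pools.computeIfAbsent(jdbcUrl, url -> {
                ComboPooledDataSource ds = new ComboPooledDataSource();
                try {
                    ds.setDriverClass("com.mysql.jdbc.Driver"); // assumption: MySQL
                } catch (PropertyVetoException e) {
                    throw new IllegalStateException(e);
                }
                ds.setJdbcUrl(url);
                ds.setUser("user");         // placeholder credentials
                ds.setPassword("password");
                return ds;
            });
            // a brand-new URL got its own pool above; known URLs reuse theirs
            return pool.getConnection();
        }
    }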
First of all, sorry for my English; I'm improving every day.
I ran into a similar situation and resolved it with the Spring Framework. Let me explain how you could solve this.
Make a Spring config file with these characteristics:
a) A resource loader: this is responsible for loading properties from configuration files or from the database; those properties are the ones used to establish the database connection.
b) A pooled database configuration parameterized with the properties you load.
Create a locator class: in this class you'll need a HashMap that maps each connection's code to its context.
Use Spring's multi-context feature: the idea is to assign a code to every connection you establish, load that connection as an application context, and then, in the locator class, put that context in the map and reuse it as often as you need (see the sketch below).
I think if you follow these steps, you can create dynamic pools or database connections as you want.
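A rough sketch of such a locator, assuming one Spring XML file per connection code (the file naming and the DataSource bean are my assumptions):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import javax.sql.DataSource;
    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class DataSourceLocator {

        private final ConcurrentMap<String, ApplicationContext> contexts =
                new ConcurrentHashMap<>();

        public DataSource locate(String code) {
            ApplicationContext ctx = contexts.computeIfAbsent(code, c ->
                    // each file defines the pool (step b), parameterized with the
                    // properties the resource loader (step a) resolves for this code
                    new ClassPathXmlApplicationContext("datasource-" + c + ".xml"));
            return ctx.getBean(DataSource.class);
        }
    }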

Session Management in a real time application

I have read different articles on session management and am aware of the different ways of implementing it.
However, below are a few questions that I wanted to understand:
How is session management implemented in a real-world application (e.g., cookies, URL rewriting)?
What would be the steps, and which is the best way to do it?
Which way should one prefer over another, and why?
How is session management done with respect to different data centers/clusters?
Thanks!
It's a security risk to use cookies and URL rewriting for managing sensitive data. The best mechanism is to use the HTTP session in conjunction with HTTPS.
In real-world scenarios, the HTTP session is used carefully to avoid bottlenecks. Rather than adding an entire object to the session, carry an attribute that can be used to obtain the entity from the database. The bottom line is that sessions need to be kept lightweight.
Session best practices include removing attributes and invalidating the session once its use is completed.
In an EJB context, it's always better to avoid stateful session beans. If one is used, the bean has to be removed after its last invocation.
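A small sketch of the "keep it lightweight" advice in a servlet context; userDao and User are placeholders for your own data access:

    // store a small key in the session, not the entity itself
    HttpSession session = request.getSession();
    session.setAttribute("userId", user.getId());

    // later, in another request: re-fetch the entity from the database
    Long userId = (Long) session.getAttribute("userId");
    User freshUser = userDao.findById(userId);

    // once its use is completed: don't let state linger
    session.invalidate();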

What is the 'best' way to do distributed transactions across multiple databases using Spring and Hibernate

I have an application - more like a utility - that sits in a corner and updates two different databases periodically.
It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.
Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.
The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.
What is the 'best' way to put transactionality around the database updates?
I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.
The best way to distribute transactions over more than one database is: Don't.
Some people will point you to XA, but XA (or two-phase commit) is a lie (or marketese).
Imagine: after the first phase has told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data and only the "success" message was lost?
The best way is to copy the data into a single place. Use a scheme which allows you to abort the copy and resume it at any time (for example, ignore data which you already have, or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem since you're only reading from the source, so when the transaction fails for any reason, you can ignore the source database. Therefore, this is a plain old single-source transaction.
After you have copied the data, process it locally.
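A minimal JDBC sketch of such a resumable copy; the orders tables and the payload column are illustrative:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    static void copyNewRows(Connection sourceConn, Connection localConn) throws SQLException {
        long maxId;
        try (Statement st = localConn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COALESCE(MAX(id), 0) FROM orders_copy")) {
            rs.next();
            maxId = rs.getLong(1); // resume point: everything <= maxId is already copied
        }

        localConn.setAutoCommit(false); // single-source transaction on the copy side
        try (PreparedStatement select = sourceConn.prepareStatement(
                 "SELECT id, payload FROM orders WHERE id > ? ORDER BY id");
             PreparedStatement insert = localConn.prepareStatement(
                 "INSERT INTO orders_copy (id, payload) VALUES (?, ?)")) {
            select.setLong(1, maxId);
            try (ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    insert.setLong(1, rs.getLong("id"));
                    insert.setString(2, rs.getString("payload"));
                    insert.executeUpdate();
                }
            }
            localConn.commit();
        } catch (SQLException e) {
            localConn.rollback(); // safe to retry later: the source was only read
            throw e;
        }
    }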
Set up a transaction manager in your context. The Spring docs have examples, and it is very simple. Then, when you want to execute a transaction:
    try {
        TransactionTemplate tt = new TransactionTemplate(txManager);
        tt.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                updateDb1();
                updateDb2();
            }
        });
    } catch (TransactionException ex) {
        // the transaction was rolled back; handle the failure
    }
For more examples and information, perhaps look at this:
XA transactions using Spring
When you say "two different databases", do you mean different database servers, or two different schemas within the same DB server?
If the former, and you want full transactionality, then you need the XA transaction API, which provides full two-phase commit. But more importantly, you also need a transaction coordinator/monitor, which manages transaction propagation between the different database systems. This is part of the Java EE spec, and a pretty rarefied part of it at that. The TX coordinator itself is a complex piece of software. Your application software (via Spring, if you so wish) talks to the coordinator.
If, however, you just mean two databases within the same DB server, then vanilla JDBC transactions should work just fine, just perform your operations against both databases within a single transaction.
In this case you would need a transaction monitor (a server supporting the XA protocol), and you'd have to make sure your databases support XA as well. Most (all?) Java EE servers come with a transaction monitor built in. If your code is not running in a Java EE server, there are a bunch of standalone alternatives: Atomikos, Bitronix, etc.
You could try Spring's ChainedTransactionManager - http://docs.spring.io/spring-data/commons/docs/1.6.2.RELEASE/api/org/springframework/data/transaction/ChainedTransactionManager.html - which supports transactions spanning multiple databases. It could be a better alternative to XA.
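A sketch of how that wiring might look; dataSource1/dataSource2 and the update methods are placeholders, and note the commits happen one after the other, not as a true two-phase commit:

    PlatformTransactionManager tm1 = new DataSourceTransactionManager(dataSource1);
    PlatformTransactionManager tm2 = new DataSourceTransactionManager(dataSource2);
    PlatformTransactionManager chained = new ChainedTransactionManager(tm1, tm2);

    new TransactionTemplate(chained).execute(status -> {
        updateDb1();
        updateDb2(); // an exception here rolls back both transactions
        return null;
    });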
