How to intercept JDBC queries with Hibernate/Spring/Tomcat? - java

I'm trying to implement the solution outlined in this answer. The short of it is: I want to set the role for each database connection in order to provide better data separation for different customers. This requires intercepting JDBC queries or transactions, setting the user before the query runs and resetting it afterwards. This is mainly done to comply with some regulatory requirements.
Currently I'm using Tomcat with Tomcat's JDBC pool, connecting to a PostgreSQL database. The application is built with Spring and Hibernate. So far I haven't found any point at which I can intercept the queries.
I tried JDBC interceptors for Tomcat's built-in pool, but they have to be global, and I need to access data from my web application in order to correlate requests to database users. As far as I can see, Hibernate's interceptors work only on entities, which is too high-level for this use case.
What I need is something like the following:
class ConnectionPoolCallback {
    void onConnectionRetrieved(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("SET ROLE " + getRole()); // getRole is some magic
        }
    }
    void onConnectionReturned(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("RESET ROLE");
        }
    }
}
And now I need a place to register this callback... Does anybody have any idea how to implement something like this?

Hibernate 4 has multi-tenancy support. For plain SQL you will need data source routing, which I believe Spring now has built in or available as an add-on.
I would not mess with (i.e. extend) the pool library.

Option 1:
As Adam mentioned, use Hibernate 4's multi-tenant support. Read the docs on Hibernate multi-tenancy and then implement the MultiTenantConnectionProvider and CurrentTenantIdentifierResolver interfaces.
In the getConnection method, call SET ROLE as you've done above. Although it's at the Hibernate level, this hook is pretty close in functionality to what you asked for in your question.
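Both hooks need some way to learn which tenant (and thus which role) the current request belongs to. A common approach is a ThreadLocal holder that a servlet filter populates from the authenticated user; here is a minimal sketch (the class and all names in it are my own illustration, not a Hibernate API):

```java
// Hypothetical holder class: a servlet filter sets the tenant/role at the
// start of each request; the Hibernate multi-tenancy hooks read it back.
final class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static void set(String tenant) {
        CURRENT.set(tenant);
    }

    static String get() {
        String tenant = CURRENT.get();
        return tenant != null ? tenant : "default"; // fallback tenant
    }

    // call from the filter's finally block so pooled threads don't leak state
    static void clear() {
        CURRENT.remove();
    }
}
```

A CurrentTenantIdentifierResolver implementation would then simply return TenantContext.get() from resolveCurrentTenantIdentifier(), and the MultiTenantConnectionProvider's getConnection(tenantIdentifier) is the natural place to issue SET ROLE before handing out the connection.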
Option 2:
I tried JDBC interceptors for Tomcat's built-in pool but they have to
be global and I need to access data from my web application in order to
correlate requests to database users.
If you can reconfigure your app to define the connection pool as a Spring bean rather than obtain it from Tomcat, you can probably add your own hook by proxying the data source:
<!-- I like c3p0, but use whatever pool you want -->
<bean id="actualDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
    <property name="jdbcUrl" value="${db.url}"/>
    <property name="user" value="${db.user}" />
    .....
<!-- uses the actual data source. name it "dataSource". i believe the Spring tx
     stuff looks for a bean named "dataSource". -->
<bean id="dataSource" class="com.musiKk.RoleSettingDSProxy">
    <property name="actualDataSource"><ref bean="actualDataSource" /></property>
</bean>
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="dataSource"><ref bean="dataSource" /></property>
    ....
And then build com.musiKk.RoleSettingDSProxy like this:
public class RoleSettingDSProxy implements DataSource {
    private DataSource actualDataSource;

    public Connection getConnection() throws SQLException {
        Connection con = actualDataSource.getConnection();
        // do your thing here. reference a thread local set by
        // a servlet filter to get the current tenant and set the role
        return con;
    }

    public void setActualDataSource(DataSource actualDataSource) {
        this.actualDataSource = actualDataSource;
    }

    // delegate the remaining DataSource methods (getConnection(String, String),
    // getLogWriter, unwrap, etc.) to actualDataSource
}
Note that I haven't actually tried option 2, it's just an idea. I can't immediately think of any reason why it wouldn't work, but it may unravel on you for some reason if you try to implement it.

One solution that comes to mind is to utilize Hibernate listeners/callbacks. But beware: this is very low-level and quite error-prone. I use it myself to get a certain degree of automated audit logging going; it was not a pretty development cycle to get it to work reliably. Unfortunately I can't share code since I don't own it.
http://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/listeners.html

Related

Spring Datasource transaction manager: transactionality across multiple instances of an application

Given that all of the DB operations I'm performing on an Oracle datasource (using JdbcTemplate) are executed via a transaction template that uses a Spring DataSourceTransactionManager:
If multiple copies of my application receive requests to perform database operations on the same datasource, will the operations still be transactional?
If another programmer connects to the same data source using a different library, will the operations performed there still be transactional?
To illustrate what exactly it is I'm doing:
val txTemplate = new TransactionTemplate(txManager, txAttribute)
txTemplate.execute(func)
where func is the function that performs the actual calls to JdbcTemplate, txManager is the transaction manager, and txAttribute is a DefaultTransactionAttribute where I define isolation, propagation, timeouts, etc.
The transaction manager is a singleton defined in Spring that takes my datasource as an argument.
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<constructor-arg ref="dataSource"/>
</bean>
<bean id="dataSource" class="oracle.jdbc.pool.OracleConnectionPoolDataSource">
...
</bean>
Note:
As I am writing this in Scala, I have implicits defined that wrap the function func inside a TransactionCallback like so:
implicit def txCallbackImplicit[T](func: => T): TransactionCallback[T] = {
    new TransactionCallback[T] {
        def doInTransaction(status: TransactionStatus) = func.asInstanceOf[T]
    }
}
So, txTemplate.execute(func) is actually calling txTemplate.execute(new TransactionCallback[T] {...}). This allows me to declare a method as transactional like so:
def foo = transactional() {
    // jdbcTemplate operations
}
Transactions are implemented by the database (Oracle in your case), not by Spring. Spring hides this very well behind many classes, but essentially it just calls JDBC connection methods (setAutoCommit, commit and rollback) at the right times.
What data you see inside a transaction (no matter whether it is part of your application or someone else's) depends on the transaction isolation level (google it ;)).
If multiple copies of my application receive requests to perform
database operations on the same datasource, will the operations still
be transactional?
The transactional behavior is not controlled by the datasource itself. The datasource is responsible for producing connections, while the TransactionManager is responsible for managing transaction boundaries. If you propagate the transaction to all operations, the TransactionManager will delimit them within the same transaction. In fact, it's possible to have distributed transactions (using two-phase commit) over distinct datasources.
If another programmer connects to the same data source using a
different library, will the operations performed here still be
transactional?
The client cannot control the service provider transaction.

How to create multiple database connections for different databases in java

I have an application which uses four databases in different geographical locations. All the databases contain the same tables; only the database name differs according to the location.
I have to create some reports in my application which use data from each database. What would be the proper way to create those database connections from a Java application, and is there a suitable design pattern for this task which I could use?
As you have not tagged your question with any of hibernate, JPA, or ORM, I assume you are dealing with plain JDBC.
Having said that, I suggest you have a DAO layer to deal with the underlying databases and leave the connection details to the specific implementations. You can configure your connection strings in some .properties files, say.
[Complement]
You can also make use of DAO factory, an implementation of Abstract Factory or Factory Method pattern, whichever suits here.
[Links]
A very fine implementation of DAO and DAO Factory, by BalusC
Core J2EE Patterns -- arguably dated but might provide some idea.
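To make the DAO-factory idea concrete, here is a bare-bones sketch (all class, enum, and URL names are made up for illustration): a factory keyed on location hands out DAOs, each wired with its own connection URL, which would normally be read from the .properties files mentioned above.

```java
import java.util.EnumMap;
import java.util.Map;

// The geographical locations whose databases share the same schema.
enum Location { US, EU, ASIA, AU }

// One DAO contract for every location; only the connection differs.
interface ReportDao {
    String jdbcUrl(); // in real code: the methods that run the report queries
}

class JdbcReportDao implements ReportDao {
    private final String url;
    JdbcReportDao(String url) { this.url = url; }
    public String jdbcUrl() { return url; }
}

class ReportDaoFactory {
    private final Map<Location, ReportDao> daos = new EnumMap<>(Location.class);

    // urls would normally be loaded from a .properties file per location
    ReportDaoFactory(Map<Location, String> urls) {
        urls.forEach((loc, url) -> daos.put(loc, new JdbcReportDao(url)));
    }

    ReportDao forLocation(Location loc) {
        return daos.get(loc);
    }
}
```

A report that aggregates across all four databases then just loops over Location.values() and asks the factory for each DAO in turn.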
There are multiple ways you can achieve this:
If you are using any Java EE container which supports distributed transactions, then you can use that functionality.
If you are using plain JDBC, you will have to maintain your own connection for every database.
For JDBC:
Provide all connection details.
Have a Facade which gives you the desired object by calling an abstract generic DAO.
Have a factory which creates DAOs based on the connection.
Use an ORM tool like Hibernate, where you can use a separate configuration per database. Tutorial.
If you are using Spring, then you can configure one datasource per database. Docs
Design Patterns:
Facade Pattern - for hiding the complexity of multiple database usage.
Factory - in case you manage the database connections yourself.
Singleton - for the datasources.
You can handle multiple connections easily using an ORM tool like Hibernate. You can specify each connection in a separate configuration file and instantiate the required connection by getting a new session factory each time.
Another way would be to use a datasource and JNDI: Java connecting to multiple databases
I think you can use a combination of the Factory and Singleton patterns for this purpose.
The ideal way to achieve this is by using a multi-dimensional system like OLAP. But see if you can create a view over those databases; then you just need to query the view (i.e. a single database connection). You can still use multiple database connections if you want.
It's very easy :)
1. Create a DataSource to get a connection to the DB
public DataSource getDataSource(String db) throws Exception {
    DataSource dt = null;
    try {
        InitialContext ic = new InitialContext();
        if (db.trim().equals("your_database_name_1")) {
            dt = (DataSource) ic.lookup("jdbc/connection_name_1");
        } else if (db.trim().equals("your_database_name_2")) {
            dt = (DataSource) ic.lookup("jdbc/connection_name_2");
        }
        return dt;
    } catch (NamingException n) {
        throw new Exception("Err getDataSource (ServiceLocator) NamingException - " + n.getMessage());
    }
}
2. Create a DataBases class; remember to close all connections at this point.
public class DataBases {
    private final YouNameDataSourceClass locator;

    public DataBases() throws Exception {
        super();
        locator = new YouNameDataSourceClass();
    }

    public Connection getConnectionAS400() throws Exception {
        return locator.getDataSource("as400_database_name").getConnection();
    }

    public Connection getConnectionOracle() throws Exception {
        return locator.getDataSource("oracle_database_name").getConnection();
    }

    public Connection getConnectionSQLServer() throws Exception {
        return locator.getDataSource("sqlserver_database_name").getConnection();
    }
}
Good luck.
Assuming you are using Spring MVC with Hibernate with XML configurations, follow these steps:
Create beans of all the databases in your spring-servlet file.
<bean id="dataSource1" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
    <property name="url" value="jdbc:sqlserver://localhost:1433;databaseName=database1"/>
    <property name="username" value="abc" />
    <property name="password" value="abc#123" />
</bean>
Create sessionFactory beans of all the databases you want in the Spring-servlet file.
<bean id="datasource1SessionFactory" class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource1"/>
    <property name="packagesToScan" value="com.id4.iprod.entity"/>
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.hbm2ddl.auto"></prop>
            <prop key="hibernate.dialect">org.hibernate.dialect.SQLServer2012Dialect</prop>
        </props>
    </property>
</bean>
Now you just need to open a session for the database you want in your DAO and fetch the desired results from the desired database.
Session datasource1= this.datasource1SessionFactory.openSession();

Spring scoped-proxy transactions are fine via JPA but not commiting via JDBC

I have a situation where I have to handle multiple clients in one app, and each client has a separate database. To support that I'm using a Spring custom scope, quite similar to the built-in request scope. A user authenticates in each request and can set the context client ID based on the passed credentials. The scoping itself seems to be working properly.
So I used my custom scope to create a scoped-proxy for my DataSource to support a different database per client. And I get connections to the proper databases.
Then I created a scoped-proxy for EntityManagerFactory to use JPA. And this part also looks OK.
Then I added a scoped-proxy for PlatformTransactionManager for declarative transaction management. I use @Transactional on my service layer and it gets propagated nicely to my Spring Data powered repository layer.
All is fine and works correctly as long as I use only JPA. I can even switch the context to a different client within the request (I use ThreadLocals under the hood) and transactions to both databases are handled correctly.
The problems start when I try to use JdbcTemplate in one of my custom repositories. At first glance all looks OK there too, as no exceptions are thrown. But when I check the database for the objects I thought I had inserted with my custom JDBC-based repository, they're not there!
I know for sure I can use JPA and JDBC together by declaring only a JpaTransactionManager and passing both the DataSource and the EntityManagerFactory to it - I checked it without the scoped-proxies and it works.
So the question is: how do I make JDBC work together with JPA using the JpaTransactionManager when I have scoped-proxied the DataSource, EntityManagerFactory and PlatformTransactionManager beans? I repeat: using only JPA works perfectly, but adding plain JDBC into the mix does not.
UPDATE1: And one more thing: all read-only (SELECT) operations work fine with JDBC too - only writes (INSERT, UPDATE, DELETE) end up not committed or rolled back.
UPDATE2: As @Tomasz suggested, I've removed the scoped proxies from EntityManagerFactory and PlatformTransactionManager, as those are indeed not needed and provide more confusion than anything else.
The real problem seems to be switching the scope context within a transaction. The TransactionSynchronizationManager binds transactional resources (i.e. the EMF or DS) to the thread at transaction start. It has the ability to unwrap the scoped proxy, so it binds the actual instance of the resource from the scope active at the time of starting the transaction. Then when I change the context within a transaction it all gets messed up.
It seems like I need to suspend the active transaction and store aside the current transaction context so I can clear it upon entering another scope, making Spring think it's not inside a transaction any more and forcing it to create a new one for the new scope when needed. Then, when leaving the scope, I'd have to restore the previously suspended transaction. Unfortunately I have been unable to come up with a working implementation yet. Any hints appreciated.
And below is some code of mine, but it's pretty standard, except for the scoped-proxies.
The DataSource:
<!-- provides database name based on client context -->
<bean id="clientDatabaseNameProvider"
class="com.example.common.spring.scope.ClientScopedNameProviderImpl"
c:clientScopeHolder-ref="clientScopeHolder"
p:databaseName="${base.db.name}" />
<!-- an extension of org.apache.commons.dbcp.BasicDataSource that
uses proper database URL based on database name given by above provider -->
<bean id="jpaDataSource" scope="client"
class="com.example.common.spring.datasource.MysqlDbInitializingDataSource"
destroy-method="close"
p:driverClassName="${mysql.driver}"
p:url="${mysql.url}"
p:databaseNameProvider-ref="clientDatabaseNameProvider"
p:username="${mysql.username}"
p:password="${mysql.password}"
p:defaultAutoCommit="false"
p:connectionProperties="sessionVariables=storage_engine=InnoDB">
<aop:scoped-proxy proxy-target-class="false" />
</bean>
The EntityManagerFactory:
<bean id="jpaVendorAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
p:database="MYSQL"
p:generateDdl="true"
p:showSql="true" />
<util:properties id="jpaProperties">
<!-- omitted for readability -->
</util:properties>
<bean id="jpaDialect"
class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:packagesToScan="com.example.model.core"
p:jpaVendorAdapter-ref="jpaVendorAdapter"
p:dataSource-ref="jpaDataSource"
p:jpaDialect-ref="jpaDialect"
p:jpaProperties-ref="jpaProperties" />
The PlatformTransactionManager:
<bean id="transactionManager"
class="org.springframework.orm.jpa.JpaTransactionManager"
p:dataSource-ref="jpaDataSource"
p:entityManagerFactory-ref="entityManagerFactory" />
<tx:annotation-driven proxy-target-class="false" mode="proxy"
transaction-manager="transactionManager" />

Changes via JDBC update are not visible in sequential JDBC select

I have code with this structure (there are a lot of classes, but the scheme is like this):
void f() {
    MyObj o = db.getById(id);
    o.setField1(value);
    db.update(o);
    o = db.getById(id);
    assertEquals(value, o.getField1());
}
The update and get methods use the same data source, injected with Spring. get works via JdbcTemplate, and update just takes a connection from the dataSource and uses raw JDBC.
update is marked with the @Transactional annotation.
Here is the definition of the transaction manager from the Spring config:
<tx:annotation-driven transaction-manager="TransactionManager"/>
<bean id="TransactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
The issue is that if I call update and then get in different invocations of the web-service methods that use them, the result is correct and I see the updated values.
But if I call them sequentially in one unit-test method, I don't see the updated value after update.
I can't post the whole read/write code here, because it is large and split into many files, but perhaps you have some ideas how to fix it.
Thanks.
You have to flush the update before you can see it in the select.
you can try
entityManager.refresh(yourEntity);
This way, you will get the entity's most recent instance wherever you use it after this line.

Using Spring Framework is it possible to connect two different databases based on some business logic

I have a web application which connects to an Oracle database. The application is now going to have a new set of users, and a new db is being planned for them. Is it possible to connect to the appropriate db based on the user who logs in? As of now the database configuration is done through a JNDIName entry in an xml file.
Absolutely. For a given DAO class (assuming you're using DAOs), create two bean definitions, one for each database, and then pick which DAO bean you want to use in your business logic:
<bean id="dao1" class="com.app.MyDaoClass">
    <property name="dataSource" ref="dataSource1"/>
</bean>
<bean id="dao2" class="com.app.MyDaoClass">
    <property name="dataSource" ref="dataSource2"/>
</bean>
where dataSource1 and dataSource2 are the DataSource beans representing your two different databases.
At runtime, your business logic selects dao1 or dao2 appropriately.
I'd suggest injecting both data sources into your DAOs and then, within your DAO, deciding the correct data source to use based on the current user. The current user can be passed to the DAO from your presentation/service layer.
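A minimal sketch of that "inject both, decide per user" idea (the class name and the isNewUser rule are placeholders, not from the original app):

```java
import javax.sql.DataSource;

// DAO that holds both data sources and picks one per call based on the user.
class UserAwareDao {
    private final DataSource legacyDataSource;
    private final DataSource newUsersDataSource;

    UserAwareDao(DataSource legacyDataSource, DataSource newUsersDataSource) {
        this.legacyDataSource = legacyDataSource;
        this.newUsersDataSource = newUsersDataSource;
    }

    // The service layer passes the current user down with each call.
    DataSource dataSourceFor(String user) {
        return isNewUser(user) ? newUsersDataSource : legacyDataSource;
    }

    // Placeholder business rule: replace with however you classify users.
    private boolean isNewUser(String user) {
        return user != null && user.startsWith("new_");
    }
}
```

Every query method in the DAO then obtains its connection via dataSourceFor(currentUser) instead of a single injected data source.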