Spring JdbcTemplate alter session - java

I want to alter the Oracle session for every connection that I get from the connection pool.
I found that it can be done by simply executing a statement. See here.
Is there a way to hook into the JdbcTemplate or the DataSource and execute a statement after the connection pool creates a new connection?
I'm using Spring Boot and creating the datasource this way:
@Bean
@ConfigurationProperties(prefix="datasource.local")
public DataSource localDataSource() {
    return DataSourceBuilder.create().build();
}

There are a lot of ways to do so.
The first one:
DataSource is an interface, so why don't you implement it yourself (the Proxy pattern)? Create something like this:
class MyDataSource implements DataSource {

    private DataSource realDataSource;

    @Override
    public Connection getConnection() throws SQLException {
        Connection c = realDataSource.getConnection();
        // do whatever you want to do with the connection (e.g. ALTER SESSION)
        return c;
    }

    // all other DataSource methods delegate directly to realDataSource
}
All other methods delegate directly to realDataSource.
This proxy can then be used in the code snippet you provided.
Alternatively, you can use AOP: provide an advice that runs after the connection is obtained and does whatever you need there. It is basically the same proxy, but created automatically by Spring.
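For illustration, here is a minimal sketch of the proxy approach wired into the Spring Boot configuration above, using Spring's DelegatingDataSource instead of a hand-written wrapper. The bean names, the @Primary marker and the ALTER SESSION statement are assumptions for this example, and note that the statement runs on every checkout from the pool, not only when a physical connection is created:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder; // lives in org.springframework.boot.autoconfigure.jdbc in Boot 1.x
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.datasource.DelegatingDataSource;

@Configuration
public class LocalDataSourceConfig {

    // the real pooled DataSource, still bound to the datasource.local.* properties
    @Bean
    @ConfigurationProperties(prefix = "datasource.local")
    public DataSource localTargetDataSource() {
        return DataSourceBuilder.create().build();
    }

    // the proxy that everything else should inject
    @Bean
    @Primary
    public DataSource localDataSource() {
        return new DelegatingDataSource(localTargetDataSource()) {
            @Override
            public Connection getConnection() throws SQLException {
                Connection c = super.getConnection();
                try (Statement st = c.createStatement()) {
                    // assumed example statement; use whatever session settings you need
                    st.execute("ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD'");
                }
                return c;
            }
        };
    }
}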

Related

Issue closing data source connection

I have a data source that is set up and then used by third-party software to execute SQL. After the SQL has run, I have another bean that executes and closes the connection.
@Bean
public DataSource datasource() {
    HikariConfig myconfig = new HikariConfig();
    ...
    return new HikariDataSource(myconfig);
}

@Bean
@DependsOn("sqlproject")
public void closeConnection() throws SQLException {
    Connection c = datasource().getConnection();
    try {
        c.close();
    } finally {
        System.out.println(c.isClosed());
    }
}
However, I can clearly still make local calls using that datasource connection to particular data. Should I not be calling datasource(), because this creates a new instance? What am I doing wrong?
You're right: when you call datasource() from the configuration class like that, a new instance is created, because Spring AOP doesn't support self-invocation via this.
Moreover, even if you had used this correctly, it wouldn't have closed the DataSource, since you're creating a new connection (via getConnection()) and then closing it.
If you want to close the HikariDataSource, you need to call HikariDataSource#close.
Also, you don't need the @Bean annotation on your void method; it makes no sense there.
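For what it's worth, a minimal sketch of closing the pool itself rather than a single connection might look like the following. The bean name sqlproject is taken from the question, while the ApplicationRunner trigger and the config class are assumptions; note that Spring also calls close() on a HikariDataSource bean automatically when the context shuts down:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

@Configuration
public class PoolConfig {

    @Bean
    public HikariDataSource datasource() {
        HikariConfig myconfig = new HikariConfig();
        // ... pool configuration ...
        return new HikariDataSource(myconfig);
    }

    // runs after startup, once the "sqlproject" bean has been created and done its work
    @Bean
    @DependsOn("sqlproject")
    public ApplicationRunner shutdownPool(HikariDataSource datasource) {
        return args -> datasource.close();
    }
}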

How to inject a datasource with autocommit on or off depending on whether a method is run inside a transaction

I have got:
a DAO class in JOOQ that in its constructor takes a javax.sql.DataSource, which is injected by Guice
a service class that calls the methods from the DAO class
I want to:
be able to annotate a few methods inside the service class as methods requiring a transaction
Possible solution: https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/jOOQ-spring-guice-example i.e.:
a method interceptor invoked when a method is annotated with the Transactional annotation
the method interceptor calls rollback() if there were exceptions or commit() if everything was fine
Guice provides a javax.sql.DataSource (BoneCP pooled connection)
the BoneCP pooled connections have the defaultAutoCommit attribute set to false
Finally, my question:
I want the datasource with autocommit set to false to be injected into all the methods that are called from a method annotated with @Transactional; in all other cases, the datasource with autocommit set to true.
How can this be achieved?
There is a simple but not elegant way: create two DataSource instances and inject appropriately.
There is also a bit more complex way with a single DataSource instance. Roughly:
AutoCommit is a property of java.sql.Connection and there is a setter for it.
Implement org.jooq.ConnectionProvider to make JOOQ use your DataSource instance.
This ConnectionProvider implementation will have a special method, e.g. startTransaction(), that creates a connection with autocommit=false and caches it in a ThreadLocal member, i.e.:
Connection conn = dataSource.getConnection();
conn.setAutoCommit(false);
threadLocal.set(conn);
return conn;
ConnectionProvider.acquire() will do something like this (simplified version):
return threadLocal.get() != null
    ? threadLocal.get()
    : dataSource.getConnection();
Other two "special" methods would be commit() and rollback() - they will do a corresponding operation on the cached connection, close the connection and remove it from threadLocal.
The #Transactional method interceptor will call the "special" methods
try {
    connectionProvider.startTransaction();
    interceptedMethod.invoke();
    connectionProvider.commit();
} catch (Exception e) {
    connectionProvider.rollback();
    throw e; // re-throw so the caller still sees the failure
}
Essentially, this is the most simple transaction manager.
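Putting the pieces above together, a rough sketch of such a ConnectionProvider might look like this (the class name and the methods other than acquire()/release() are assumptions, and error handling is kept minimal):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.jooq.ConnectionProvider;
import org.jooq.exception.DataAccessException;

public class TransactionAwareConnectionProvider implements ConnectionProvider {

    private final DataSource dataSource;
    private final ThreadLocal<Connection> threadLocal = new ThreadLocal<>();

    public TransactionAwareConnectionProvider(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // called by the @Transactional interceptor before the method runs
    public void startTransaction() {
        try {
            Connection conn = dataSource.getConnection();
            conn.setAutoCommit(false);
            threadLocal.set(conn);
        } catch (SQLException e) {
            throw new DataAccessException("Could not start transaction", e);
        }
    }

    public void commit() {
        endTransaction(true);
    }

    public void rollback() {
        endTransaction(false);
    }

    @Override
    public Connection acquire() {
        try {
            Connection cached = threadLocal.get();
            // inside a transaction: reuse the cached autocommit=false connection,
            // otherwise hand out a fresh (autocommit=true) one from the pool
            return cached != null ? cached : dataSource.getConnection();
        } catch (SQLException e) {
            throw new DataAccessException("Could not acquire connection", e);
        }
    }

    @Override
    public void release(Connection connection) {
        // transactional connections are closed in commit()/rollback() instead
        if (connection != threadLocal.get()) {
            try {
                connection.close();
            } catch (SQLException e) {
                throw new DataAccessException("Could not release connection", e);
            }
        }
    }

    private void endTransaction(boolean commit) {
        Connection conn = threadLocal.get();
        try {
            if (commit) {
                conn.commit();
            } else {
                conn.rollback();
            }
            conn.close();
        } catch (SQLException e) {
            throw new DataAccessException("Could not end transaction", e);
        } finally {
            threadLocal.remove();
        }
    }
}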

Embedded Derby in OSGi, creating multiple connection using connection pool

I want to create instances of a class which will have access to the underlying embedded Derby database, and pass this class to each bundle binding to my database bundle using Declarative Services.
I have seen in the Derby documentation that sharing one connection between multiple threads has many pitfalls. So I was thinking of creating a connection for each instance of the class I am creating. Since I only want a very simple way to create multiple connections and manage them, using "MiniConnectionPoolManager" seems like a good option. The sample code for Derby is shown below:
org.apache.derby.jdbc.EmbeddedConnectionPoolDataSource dataSource = new org.apache.derby.jdbc.EmbeddedConnectionPoolDataSource();
dataSource.setDatabaseName("c:/temp/testDB");
dataSource.setCreateDatabase("create");
MiniConnectionPoolManager poolMgr = new MiniConnectionPoolManager(dataSource, maxConnections);
...
Connection connection = poolMgr.getConnection();
...
connection.close();
But the documentation does not cover many things, plus I am a beginner with databases. My questions are:
When I create a new class that needs a database connection to perform inserts, updates and other actions, shall I pass the 'poolMgr' and call poolMgr.getConnection() from the newly created class?
When should I close this connection? I don't know for how long the bundle (user) will use the new class, so shall I save the newly created connection in a private global variable and force the user to execute an unregister method where I could then close the connection? Or shall I just close all connections when my database bundle is being deactivated?
Other suggestions on managing different classes accessing one database are also appreciated. Thank you in advance.
Edit:
The main class in my database bundle is always active as long as the application is running. It is the bundles requesting an instance of the new class (performing database operations) that come and go. Also, since it will be deployed on an embedded system, I can only use applications with a small footprint.
You should get a connection from a connection pool when you need it and close the connection as soon as you can. It is the job of the connection pool to re-use connections, not yours.
In other words: Do not keep a connection alive until your consumer bundle is deactivated.
Connection pools normally implement the DataSource interface, so you should use the pools via it. In that case you can replace the pool implementation easily without changing your code. E.g.:
@Component
public class MyComponent {

    // Connection pool based DataSource
    @Reference
    DataSource dataSource;

    public void myFunction() {
        try (Connection c = dataSource.getConnection()) {
            // Database operations
        } catch (SQLException e) {
            // TODO
        }
    }
}
When you find yourself repeating the same code many times (getting connection, catching SQLException), you can write a simple component that accepts functional interfaces. E.g.:
@Component
@Service
public class SQLHelper {

    @Reference // This is a connection pool DataSource
    private DataSource dataSource;

    public <R> R execute(Callback<R> callback) {
        try (Connection c = dataSource.getConnection()) {
            return callback.call(c);
        } catch (SQLException e) {
            // UncheckedSQLException is a custom runtime wrapper around SQLException
            throw new UncheckedSQLException(e);
        }
    }
}
Your functional interface would look like this:
public interface Callback<R> {
    R call(Connection connection);
}
And you would use it like this:
sqlHelper.execute((Connection c) -> {
    // Do some stuff with the connection
    return null; // or whatever result you need
});
Using transactions
If you want to use atomic transactions, I suggest that you should use org.apache.derby.jdbc.EmbeddedXADataSource together with org.apache.commons.dbcp.managed.BasicManagedDataSource from commons-dbcp. After that, you can handle transactions via JTA.
It is hard to use the JTA API directly. You should choose a library that helps you propagate transactions.
A small guide based on Declarative Services:
Install derby jar into your OSGi container
Install pax-derby bundle as well! By doing that, you will have a DataSourceFactory OSGi service
Install everit-dsf-bundle with its dependencies! You will see two new DS components. Create a configuration for the one called XADataSource via the webconsole! All configuration options have descriptions.
Install a JTA Transaction Manager into the OSGi container! You have several choices. I normally use Aries Transaction Manager, which embeds Geronimo TM.
Install everit-commons-dbcp-component with its dependencies! You will see two new DS components. Configure the Managed one in the webconsole and set the previously created XADataSource as the target! The transactional pool will take care of providing the same connection if you request-and-close connections within the scope of the same transaction.
Install everit-transaction-helper in your OSGi container! You will see a new OSGi service with the interface TransactionHelper (provided by a configurable DS component).
Now you have everything to write your code. Your component would be similar to the following:
@Component
@Service
public class MyComponent {

    @Reference
    private DataSource dataSource;

    @Reference
    private TransactionHelper th;

    public void myFunction() {
        th.required(() -> {
            try (Connection c = dataSource.getConnection()) {
                // My SQL statements
            } catch (SQLException e) {
                // TODO
            }
        });
    }
}
In case you do not need transaction handling, you can:
use the standard EmbeddedDataSource
use any non-transactional connection pool
skip the installation of the TransactionManager and TransactionHelper bundles
skip the usage of TransactionHelper from the code
A more complex guide (that also takes care of schema creation and uses OO based queries) is available at http://cookbook.everit.org/persistence/index.html.
Update
You do not have to get a connection for every SQL statement. You should get a connection, execute as many SQL statements as you can within a "moment", and then call close on the connection.
If you have to run three SQL statements right after each other, you should request a connection, execute the three SQL statements and then call close on the connection.
If you close the requested connection within the same function in which you requested it from the pool, you are probably doing things right. You might call other functions and pass the connection as a parameter, but they should only use it to run SQL statements and then return.
You should not keep a connection alive while waiting for another user action. That is the job of the connection pool. When you call close on a connection that is provided by a pool, the connection is not closed physically but only returned to the pool.
You should keep the connection object in a local variable. If you use a member variable for your connection object, you should suspect that something is wrong with your code (the only exception is if you pass the Connection to an object that lives for a very short time and holds the connection in a member variable to keep the code cleaner).
Please note that if you use Java 6 or earlier, you should close the connection in a finally block to avoid unclosed connections.
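For example, without try-with-resources the close has to happen in a finally block; a minimal sketch (assuming a dataSource in scope, with error handling left to the reader):

Connection c = null;
try {
    c = dataSource.getConnection();
    // database operations
} catch (SQLException e) {
    // TODO: handle or log
} finally {
    if (c != null) {
        try {
            c.close(); // always runs, even if the operations above threw
        } catch (SQLException ignored) {
        }
    }
}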
MiniConnectionPoolManager might be a great solution for embedded devices as it is really "mini". The only issue is that it does not implement the DataSource interface, so your business code would have to use the MiniCPM classes directly. That makes it much harder to switch to another connection pool if you find a bug or need a more complex pool later.
If you decide to use MiniCPM, I suggest that you write a component that implements DataSource and delegates the getConnection() function to a MiniCPM instance. E.g.:
@Component
@Service
public class MiniCPMDataSourceComponent implements DataSource {

    @Reference
    protected ConnectionPoolDataSource cpDataSource;

    private MiniConnectionPoolManager wrapped;

    @Activate
    public void activate() {
        this.wrapped = new MiniConnectionPoolManager(cpDataSource);
    }

    @Override
    public Connection getConnection() throws SQLException {
        return wrapped.getConnection();
    }

    @Override
    public Connection getConnection(String user, String password) {
        throw new UnsupportedOperationException();
    }

    @Deactivate
    public void deactivate() throws SQLException {
        wrapped.dispose();
    }
}
You can decorate this component with configuration options like the maximum connection count and timeout (both supported by MiniCPM). If you use the service provided by this component, you will be able to switch the connection pool without changing your business code, and your business bundle will not be wired directly to MiniCPM.
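As an illustration, reading such a setting from the component configuration could look roughly like this (the "maxConnections" property name is made up, and the two-argument MiniConnectionPoolManager constructor is assumed):

@Activate
public void activate(Map<String, Object> properties) {
    // "maxConnections" is an assumed configuration property name, defaulting to 10
    Object value = properties.get("maxConnections");
    int maxConnections = (value != null) ? Integer.parseInt(String.valueOf(value)) : 10;
    this.wrapped = new MiniConnectionPoolManager(cpDataSource, maxConnections);
}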

Spring programmatic transaction management caveat?

Spring supports programmatic transactions, which give us fine-grained control over TX management. According to the Spring documentation, one can use programmatic TX management by:
1. utilizing Spring's TransactionTemplate:
transactionTemplate.execute(new TransactionCallbackWithoutResult() {
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        try {
            updateOperation1();
            updateOperation2();
        } catch (SomeBusinessExeption ex) {
            status.setRollbackOnly();
        }
    }
});
2. leveraging PlatformTransactionManager directly (inject a PlatformTransactionManager implementation into the DAO):
DefaultTransactionDefinition def = new DefaultTransactionDefinition();
def.setName("SomeTxName");
def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

// txManager is a reference to a PlatformTransactionManager
TransactionStatus status = txManager.getTransaction(def);
try {
    updateOperation1();
    updateOperation2();
} catch (MyException ex) {
    txManager.rollback(status);
    throw ex;
}
txManager.commit(status);
For the sake of simplification, let's say we are dealing with JDBC database operations.
I am wondering: do the database operations that happen in updateOperation1() and updateOperation2() in the second snippet have to be implemented with JdbcTemplate or JdbcDaoSupport? If they are not, are the operations actually performed outside of any transaction?
My analysis is that if we don't use JdbcTemplate or JdbcDaoSupport, we will inevitably create/retrieve a connection from the datasource ourselves, and the connection we get is of course not the connection that the PlatformTransactionManager uses underneath to manage the transaction.
I dug into the Spring source code and skimmed the related classes, and found that PlatformTransactionManager retrieves a connection contained in a ConnectionHolder, which in turn is retrieved from TransactionSynchronizationManager. I also found that JdbcTemplate and JdbcDaoSupport try to get a connection via a similar routine from TransactionSynchronizationManager.
TransactionSynchronizationManager manages many resources, including the connection, per thread (basically it uses a ThreadLocal to ensure each thread gets its own instance of the managed resource).
So I think the connection retrieved by PlatformTransactionManager and by JdbcTemplate or JdbcDaoSupport is the same, which explains how Spring programmatic transactions ensure that updateOperation1() and updateOperation2() are guarded by a transaction.
Is my analysis correct? If it is, why doesn't the Spring documentation emphasize this caveat?
Yes, it's correct.
Any code that uses raw Connections should obtain them from the DataSource in a special way in order to participate in transactions managed by Spring (12.3.8 DataSourceTransactionManager):
Application code is required to retrieve the JDBC connection through DataSourceUtils.getConnection(DataSource) instead of Java EE's standard DataSource.getConnection.
Another option (if you cannot change the code that calls getConnection()) is to wrap your DataSource with TransactionAwareDataSourceProxy.
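For example, a DAO that works with raw JDBC connections could participate in the Spring-managed transaction in either of the two ways mentioned above; a sketch (the DAO class and method names are made up):

import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy;

public class RawJdbcDao {

    private final DataSource dataSource;

    public RawJdbcDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void updateOperation1() {
        // Option 1: fetch the connection through DataSourceUtils so it is the
        // same thread-bound connection the PlatformTransactionManager is using
        Connection con = DataSourceUtils.getConnection(dataSource);
        try {
            // ... JDBC work on con ...
        } finally {
            // only closes the connection if it is not bound to a transaction
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}

// Option 2: if the DAO code cannot be changed and calls dataSource.getConnection()
// directly, hand it a wrapped DataSource instead:
// DataSource proxy = new TransactionAwareDataSourceProxy(realDataSource);
// RawJdbcDao dao = new RawJdbcDao(proxy);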

Configuring DAO factory with Pooled DataSource

I'm after a bit of advice regarding configuring a DAO factory with a pooled datasource. Suppose it's a JDBC DAO factory (from an abstract factory) and the pooled datasource is configured and managed by the application server, e.g. Glassfish.
When the factory is created for the first time (Singleton pattern), it does a JNDI lookup for the pooled datasource, e.g. using a name from a properties file, which sets the pooled datasource on the JDBC DAO factory.
Then, when you instantiate and return the concrete DAO, would you pass it a reference to the datasource so it can retrieve a connection to the database?
Basically what I did was encapsulate that datasource as a field in a base class called DAO. In the constructor of the DAO you pass in the JNDI name of the connection you want.
public DAO(String jndiName) throws NamingException {
    ds = DataSourceFactory.getInstance().lookup(jndiName);
}
Then in all of your concrete classes you simply extend from DAO and can use the datasource as you want.
public concreteDAO() throws NamingException {
    super("Some JNDI Name That this DAO should know");
}
The same DAO class has some other utility methods, like a cleanup method that silently closes ResultSets, Statements and Connections. That way I just have to call it in the finally clause of all my methods.
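A sketch of what such a cleanup helper in the DAO base class could look like (the method name is illustrative; it assumes the java.sql types are imported):

protected void cleanup(ResultSet rs, Statement stmt, Connection conn) {
    // close each resource independently and swallow the exceptions,
    // so it is safe to call from any finally block
    if (rs != null) {
        try { rs.close(); } catch (SQLException ignored) { }
    }
    if (stmt != null) {
        try { stmt.close(); } catch (SQLException ignored) { }
    }
    if (conn != null) {
        try { conn.close(); } catch (SQLException ignored) { }
    }
}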
