I have a use case where we have to silently replace the DB, propagating changes made on one node in the cluster to the other node. The other node should not restart the process, but should refresh its DB connection so that it picks up the new changes once the DB file (SQLite) has been replaced. Is there a way to do this in jOOQ? I have not found any relevant API to refresh the DB connection.
I tried the following, but I get a deadlock:
public static void closeDBConnection(Configuration config) {
    try {
        Connection connection = getDBConnection(config);
        config.connectionProvider().release(connection);
        if (!connection.isClosed()) {
            connection.close();
        }
    } catch (SQLException ex) {
        // throw new ReAttachException();
    }
}

public static Connection getDBConnection(Configuration config) {
    return config.connectionProvider().acquire();
}
In the calling method:

private void reattach(FooRecord record, Configuration config) {
    record.detach();
    DBUtils.closeDBConnection(config);
    DBUtils.getDBConnection(config);
    record.attach(config);
}
jOOQ doesn't manage your connection for you; you'll have to do that either yourself, or by using your JDBC driver's or connection pool's capabilities.
In particular, your current attempt calls jOOQ's SPI ConnectionProvider, which you shouldn't call yourself. An SPI is intended for you to implement and for jOOQ to use and call. This means that your implementation should already handle the connection replacement, and jOOQ shouldn't notice anything about it:
class MyConnectionProvider implements ConnectionProvider {

    @Override
    public Connection acquire() {
        // Do your own reconnection magic here
        connection = ...

        // Pass this connection to jOOQ. jOOQ should assume it will always work.
        return connection;
    }

    @Override
    public void release(Connection connection) {
        // Close or return to the pool, etc.
    }
}
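The SPI implementation above can be sketched without jOOQ on the classpath by mirroring the ConnectionProvider contract in plain Java (the real interface lives in org.jooq; SimpleConnectionProvider and RefreshingConnectionProvider below are illustrative names, not jOOQ API). The provider owns a swappable connection factory, so when the SQLite file is replaced only the factory is swapped, and callers never notice:

```java
import java.sql.Connection;
import java.util.function.Supplier;

// Plain-Java mirror of the ConnectionProvider contract; illustrative only.
interface SimpleConnectionProvider {
    Connection acquire();
    void release(Connection connection);
}

class RefreshingConnectionProvider implements SimpleConnectionProvider {

    // volatile so a refresh on one thread is visible to acquirers on other threads
    private volatile Supplier<Connection> factory;

    RefreshingConnectionProvider(Supplier<Connection> factory) {
        this.factory = factory;
    }

    // call this after the underlying db file has been replaced
    void refresh(Supplier<Connection> newFactory) {
        this.factory = newFactory;
    }

    @Override
    public Connection acquire() {
        // every acquire opens against the *current* database
        return factory.get();
    }

    @Override
    public void release(Connection connection) {
        try {
            connection.close();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

With jOOQ, the same idea lives inside your ConnectionProvider implementation: acquire() hands out a connection to the current database file, and release() closes it.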
Related
I ran into some code and wanted to research other people's approaches to it, but I'm not sure what the design pattern is called. Searching for "database executer" mostly returned results about Java's Executor framework, which is unrelated.
The pattern I'm trying to identify uses a single class to manage connections and execute queries through functions, so that any issues related to connection management are isolated in one place.
Example:
// Service class
public class Service {

    private final Executer executer;

    public void query(String query) {
        ResultSet rs = (ResultSet) executer.execute((connection) -> {
            Statement st = connection.createStatement();
            return st.executeQuery(query);
        });
    }
}
// Executer class
public class Executer {

    private final DataSource dataSource;

    public Object execute(Function<Connection, Object> function) {
        Connection connection = null;
        try {
            connection = dataSource.getConnection();
            return function.apply(connection);
        } catch (Exception e) {
            // log...
        } finally {
            // close or return connection to pool
        }
        return null;
    }
}
As you can see above, if you ever have a connection leak you don't need to search through a bunch of DAOs or services; it's all contained in a single executer class. Any idea what this strategy or design pattern is called? Has anyone seen this before, or does anyone know of open source projects that use this strategy/pattern?
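For what it's worth, this shape is commonly known as the execute-around idiom (sometimes "execute around method"): the executor owns acquisition and cleanup, and the caller passes in only the work. A minimal generic sketch, with illustrative names and no JDBC dependency so it can run standalone:

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

// Generic execute-around: acquisition and cleanup live in one place,
// callers supply only the work. R is the resource type (e.g. Connection).
class ResourceExecutor<R> {

    private final Supplier<R> open;   // how to acquire the resource
    private final Consumer<R> close;  // how to release it

    ResourceExecutor(Supplier<R> open, Consumer<R> close) {
        this.open = open;
        this.close = close;
    }

    <T> T execute(Function<R, T> work) {
        R resource = open.get();          // acquire exactly once
        try {
            return work.apply(resource);  // the caller's work
        } finally {
            close.accept(resource);       // always released, even on exception
        }
    }
}
```

With `R = Connection`, `open` would call the pool and `close` would return the connection to it, which is exactly what the Executer class above centralizes.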
So, here is some background: I'm currently working at a company providing SaaS, and my work involves writing methods using JDBC to retrieve and process data in a database. Here is the problem: most of the methods come with a certain pattern for managing the connection:
public Object someMethod(Object... parameters) throws MyCompanyException {
    Connection con = null;
    try {
        con = ConnectionPool.getConnection();
        con.setAutoCommit(false);
        // do something here
        con.commit();
        con.setAutoCommit(true);
    } catch (SomeException1 e) {
        con.rollback();
        throw new MyCompanyException(e);
    } catch (SomeException2 e) {
        con.rollback();
        throw new MyCompanyException(e);
    }
    // repeat until all exceptions are caught and handled
    finally {
        ConnectionPool.freeConnection(con);
    }
    // return something if the method is not void
}
It has already been adopted as a company standard to write all methods like this, so that a method rolls back all changes it has made if any exception is caught, and the connection is freed as soon as possible. However, from time to time some of us forget to do these routine things when coding, like releasing the connection or rolling back when an error occurs, and such a mistake is not easily detected until our customers complain about it. So I've decided to make these routine things happen automatically, even if they are not declared in the method. Connection initiation and setup can be done easily in the constructor:
public abstract class SomeAbstractClass {

    protected Connection con;

    public SomeAbstractClass() {
        con = ConnectionPool.getConnection();
        con.setAutoCommit(false);
    }
}
But the real problem is releasing the connection automatically, immediately after the method finishes. I've considered using finalize(), but that is not what I'm looking for: finalize() is called by the GC, which means it might not run when the method finishes, or even when the object is no longer referenced; it may only be called when the JVM is actually running out of memory.
Is there anyway to free my connection automatically and immediately when the method finishes its job?
Use "try-with-resources". It is a language construct: you write a typical-looking try-catch block, and whether anything goes wrong or you exit normally, the resources are closed.
try (Connection con = ConnectionPool.getConnection()) {
    con.doStuff(...);
}
// at this point Connection con is closed
It works because Connection extends AutoCloseable: for any object declared in the "resource acquisition" portion of the try statement that implements AutoCloseable, its close() method will be called before control is passed out of the try / catch block.
This removes the need for finally { ... } in many scenarios, and it is actually safer than most hand-written finally { ... } blocks, as it also accommodates exceptions thrown in the catch { ... } and finally { ... } blocks while still closing the resource.
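These guarantees can be demonstrated without a database, using a toy AutoCloseable (the class and method names below are illustrative): the resource is closed even when the body throws, and it is closed before the catch block runs:

```java
import java.util.ArrayList;
import java.util.List;

// Demonstrates try-with-resources ordering with a toy resource:
// close() runs even when the body throws, and before any catch block.
class TryWithResourcesDemo {

    static class Resource implements AutoCloseable {
        final List<String> log;

        Resource(List<String> log) {
            this.log = log;
            log.add("opened");
        }

        @Override
        public void close() {
            log.add("closed");
        }
    }

    static List<String> run(boolean fail) {
        List<String> log = new ArrayList<>();
        try (Resource r = new Resource(log)) {
            log.add("working");
            if (fail) throw new IllegalStateException("boom");
        } catch (IllegalStateException e) {
            log.add("caught " + e.getMessage());
        }
        return log;
    }
}
```

Running `run(true)` logs "closed" before "caught boom", showing that the resource is released before the catch block executes, exactly as with a Connection.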
One of the standard ways to do this is AOP. You can look at how the Spring Framework handles JDBC transactions and connections and manages them using MethodInterceptor. My advice is to use Spring in your project rather than reinvent the wheel.
The idea behind MethodInterceptor is that you add code that creates and opens a connection before a JDBC-related method is called, puts the connection into a ThreadLocal so that your method can get it to make SQL calls, and then closes it after the method has executed.
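The interceptor idea can be sketched with a plain JDK dynamic proxy (Spring's MethodInterceptor is richer, but the shape is the same). Here Dao, wrap, and the log are illustrative stand-ins, with the log playing the role of the opened and closed connection:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.List;

// Minimal before/after interception around a target method using a JDK proxy.
class InterceptorDemo {

    interface Dao {
        String findUser();
    }

    static Dao wrap(Dao target, List<String> log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.add("open connection");          // "before" advice
            try {
                return method.invoke(target, args);
            } finally {
                log.add("close connection");     // "after" advice, runs even on failure
            }
        };
        return (Dao) Proxy.newProxyInstance(
                Dao.class.getClassLoader(), new Class<?>[] { Dao.class }, handler);
    }
}
```

In the Spring version, "open connection" binds a real Connection to a ThreadLocal and "close connection" unbinds and closes it, so the DAO method in between never touches connection management.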
You could add a method to your ConnectionPool class for example:
public <T> T execute(Function<Connection, T> query,
                     T defaultValue,
                     Object... parameters) {
    Connection con = null;
    try {
        con = ConnectionPool.getConnection();
        con.setAutoCommit(false);
        T result = query.apply(con);
        con.commit();
        con.setAutoCommit(true);
        return result;
    } catch (SomeException1 e) {
        con.rollback();
        throw new MyCompanyException(e);
    } catch (SomeException2 e) {
        // for non-fatal failures, fall back to the default value
        con.rollback();
        return defaultValue;
    }
    // etc.
    finally {
        ConnectionPool.freeConnection(con);
    }
}
And you call it from the rest of your code with:
public Object someMethod(Object... parameters) throws MyCompanyException {
    return ConnectionPool.execute(
        con -> { ... },  // use the connection and return something
        null,            // default value
        parameters
    );
}
I want to create a multi-threaded part of a program (with a GUI) that loads data from a very large DB table (over 30 million rows) using the Fork/Join framework and the RecursiveAction class, because many small queries execute faster than one large one, as I verified experimentally.
For example, every fork loads 50 rows out of the needed 1000. Something like this:
class ForkLoader extends RecursiveAction {

    private static Connection con;
    private Map<Integer, Double> map;              // in our case a ConcurrentHashMap
    private static final int seqThreshold = 50;
    private List<Integer> id;                      // list of id_field values in the DB table
    int start, end;

    {
        try {
            con = DriverManager.getConnection("jdbc:mysql://some_ip/some_db", "username", "password");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public ForkLoader(Map<Integer, Double> map, List<Integer> id, int start, int end) {
        this.map = map;
        this.start = start;
        this.end = end;
        this.id = id;
    }

    @Override
    protected void compute() {
        if ((end - start) < seqThreshold) {
            try {
                Statement stmt = con.createStatement();
                for (int i = start; i < end; i++) {
                    String query = "text of some query" + id.get(i);
                    ResultSet rs = stmt.executeQuery(query);
                    map.put(id.get(i), /* some double from the result set */);
                }
                stmt.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        } else {
            int middle = (start + end) / 2;
            invokeAll(new ForkLoader(map, id, start, middle), new ForkLoader(map, id, middle, end));
        }
    }
}
Is it thread-safe to use one static connection for all forks? If you know a better way to solve this task, please show it.
Usually this kind of application does it with a connection pool: each thread can use one connection exclusively, and if you have more threads than connections at the same time, the threads that don't have a connection assigned must wait.
If you want a framework implementing connection pooling, you can check these:
Apache Commons DBCP http://commons.apache.org/proper/commons-dbcp/
c3p0 http://sourceforge.net/projects/c3p0/
Apache jdbc-pool http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html (thanks to MRalwasser)
On the other hand, maybe you want to delegate all the thread and connection management to an EJB container; basically you can implement the same with EJBs and a connection pool managed by the container. You can check the following containers:
Apache TomEE http://tomee.apache.org/
Oracle Glassfish https://glassfish.java.net/
Apache OpenEJB if you want only the EJB container without the full Java EE platform
In general, java.sql.Connection instances (JDBC connections) are not thread-safe; you should not use the same Connection object from multiple threads at the same time.
Create a separate Connection for each thread instead.
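If each thread should transparently get its own connection, one common sketch is a ThreadLocal that lazily creates a per-thread instance. The class below is generic, and the factory is a stand-in for something like DriverManager.getConnection(...), so the sketch assumes no real JDBC driver:

```java
import java.util.function.Supplier;

// One resource instance per thread: each thread lazily creates its own
// instance on first get(), and always receives that same instance afterwards.
class PerThreadResource<T> {

    private final ThreadLocal<T> local;

    PerThreadResource(Supplier<T> factory) {
        this.local = ThreadLocal.withInitial(factory);
    }

    T get() {
        return local.get();
    }
}
```

With `T = Connection` and the factory opening a real connection, every fork's worker thread gets its own Connection, avoiding the shared-static problem above (though you still have to close each thread's connection when the work is done).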
I want to use c3p0 for connection pooling in a non-web Java application that I wrote. I used a traditional singleton connection and I am not satisfied with its performance, so I decided to go for connection pooling. I took a look at the c3p0 website, and here is what it says about using c3p0:
ComboPooledDataSource cpds = new ComboPooledDataSource();
cpds.setDriverClass( "org.postgresql.Driver" ); //loads the jdbc driver
cpds.setJdbcUrl( "jdbc:postgresql://localhost/testdb" );
cpds.setUser("swaldman");
cpds.setPassword("test-password");
// the settings below are optional -- c3p0 can work with defaults
cpds.setMinPoolSize(5);
cpds.setAcquireIncrement(5);
cpds.setMaxPoolSize(20);
// The DataSource cpds is now a fully configured and usable pooled DataSource
I want to know how I could use this for an MS SQL Windows-authentication connection, but I could not figure out how. Also, how can I run my queries through that connection? Using connection pooling seems like a whole different world from a traditional database connection, and I am new to it.
Here is what I figured out:
public class DatabaseManager {

    private static DataSource dataSource;
    private static final String DRIVER_NAME;
    private static final String URL;
    private static final String UNAME;
    private static final String PWD;
    private static final String dbName;

    static {
        dbName = "SNfinal";
        DRIVER_NAME = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
        URL = "jdbc:sqlserver://localhost:1433;" +
              "databaseName=" + dbName + ";integratedSecurity=true";
        UNAME = "";
        PWD = "";
        dataSource = setupDataSource();
    }

    public static Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }

    private static DataSource setupDataSource() {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        try {
            cpds.setDriverClass(DRIVER_NAME);
        } catch (PropertyVetoException e) {
            e.printStackTrace();
        }
        cpds.setJdbcUrl(URL);
        cpds.setUser(UNAME);
        cpds.setPassword(PWD);
        cpds.setMinPoolSize(1000);
        cpds.setAcquireIncrement(1000);
        cpds.setMaxPoolSize(20000);
        return cpds;
    }

    public static ResultSet executeQuery(String SQL, String dbName) {
        ResultSet rset = null;
        try {
            Connection con = DatabaseManager.getConnection();
            Statement st = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            rset = st.executeQuery(SQL);
        } catch (SQLException e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
        return rset;
    }

    public static void executeUpdate(String SQL, String dbName) {
        try {
            Connection con = DatabaseManager.getConnection();
            Statement st = con.createStatement();
            st.executeUpdate(SQL);
        } catch (SQLException e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
    }
}
When I use this class it works well for around 2000 queries; after that it stops working, with some exceptions related to resource allocation.
if you are evolving an application from using a single cached Connection to using a Connection pool, the main things you have to do are...
1) do not store any Connections as static or member variables of objects. store only a reference to the DataSource, cpds in the code sample above;
2) each time you need to use a Connection, call getConnection() on the pool-backed DataSource;
3) be sure to close() the Connection after each use, reliably (i.e. in a finally block with each resource close wrapped in its own try/catch or, if your codebase is Java 7, via try-with-resources). if you don't, you'll eventually leak Connections and exhaust the pool. c3p0 has some hacks to help you with that, but the best advice is not to write leaky code.
however you authenticate to acquire a single Connection should be how you authenticate via the pool. do you have to do something special or unusual to authenticate in your environment?
So, big big problems.
First some odds and ends: System.exit(0) is a bad way to respond to exceptions, and your methods accept a dbName parameter that has no function.
But the big, huge, bad problem is that you do no resource cleanup whatsoever. Your executeQuery and executeUpdate methods open Connections and then fail to close() them. That will lead to resource leaks in short order. If you want to open Connections inside methods like these, you have to return them in some manner so that they can be close()ed after use, which gets cumbersome. You can redefine the methods to accept Connection objects, that is, something like...
ResultSet executeQuery(Connection con, String query) {
    ...
}
...or, better yet, just let your clients use the JDBC API directly, which will in fact be much simpler than using your execute methods, which do very little.
If your codebase is Java 7, try-with-resources is a convenient way to ensure JDBC resources are cleaned up. If not, you'll have to use explicit finally clauses (with the calls to close() inside finally nested in their own try/catches).
As for the exceptions you're seeing, their messages are pretty clear about the cause: you are using ResultSets after they have been close()ed. The question is why. I don't have a simple answer, but in general you are not being very clean about resource cleanup; I suspect that is the problem. I'm surprised you manage to get 2000 queries to run with this code, since you are leaking Connections and should have run out. So there are some mysteries. But one way or another, you are occasionally trying to use ResultSets after they have been close()ed, probably by some other thread. Maybe you are doing something non-obvious to close() Connections, like using resultSet.getStatement().getConnection() to find the resource you need to close(), and then close()ing the Connection before you've finished working with the ResultSet?
Good luck!
I am using a connection pool (snaq.db.ConnectionPool) in my application. The connection pool is initialized like this:
String dburl = propertyUtil.getProperty("dburl");
String dbuserName = propertyUtil.getProperty("dbuserName");
String dbpassword = propertyUtil.getProperty("dbpassword");
String dbclass = propertyUtil.getProperty("dbclass");
String dbpoolName = propertyUtil.getProperty("dbpoolName");
int dbminPool = Integer.parseInt(propertyUtil.getProperty("dbminPool"));
int dbmaxPool = Integer.parseInt(propertyUtil.getProperty("dbmaxPool"));
int dbmaxSize = Integer.parseInt(propertyUtil.getProperty("dbmaxSize"));
long dbidletimeout = Long.parseLong(propertyUtil.getProperty("dbidletimeout"));
Class.forName(dbclass).newInstance();
ConnectionPool moPool = new ConnectionPool(dbpoolName, dbminPool, dbmaxPool, dbmaxSize,
dbidletimeout, dburl, dbuserName, dbpassword);
DB Pool values used are:
dbminPool=5
dbmaxPool=30
dbmaxSize=30
dbclass=org.postgresql.Driver
dbidletimeout=25
My application was leaking connections somewhere (a connection was not released), due to which the connection pool was getting exhausted. I have fixed that code for now.
Shouldn't the connections be closed after the idle timeout period? If that is not a correct assumption, is there any way to close the open idle connections anyway (through Java code only)?
The timeout variable does not seem to correspond to the time the connection has been idle, but to how long the pool can wait before returning a new connection or throwing an exception (I had a look at the source code; I don't know if it is up to date). I think it would be rather difficult to keep track of "idle" connections, because what does "idle" really mean in this case? You might want to keep a connection for later use. So I would say that the only safe way for the connection pool to know that you are done with a connection is to call close() on it.
If you are worried about the development team forgetting to call close() in their code, there is a technique which I describe below and I have used in the past (in my case we wanted to keep track of unclosed InputStreams but the concept is the same).
Disclaimer:
I assume that the connections are only used during a single request and do not span consecutive requests. In the latter case you can't use the solution below.
Your connection pool implementation seems to already use techniques similar to the ones I describe below (i.e. it already wraps the connections), so I cannot possibly know whether this will work for your case. I have not tested the code below; I just use it to describe the concept.
Please use that only in your development environment. In production you should feel confident that your code is tested and that it behaves correctly.
Having said the above, the main idea is this: we have a central place (the connection pool) from which we acquire resources (connections), and we want to keep track of whether those resources are released by our code. We can use a web Filter with a ThreadLocal object that keeps track of the connections used during the request. I named this class TrackingFilter, and the object that keeps track of the resources is the Tracker class.
public class TrackingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        Tracker.start();
        try {
            chain.doFilter(request, response);
        } finally {
            Tracker.stop();
        }
    }
    ...
}
For the Tracker to be able to keep track of the connections, it needs to be notified every time a connection is acquired with getConnection() and every time a connection is closed with a close() call. To do that in a way that is transparent to the rest of the code, we need to wrap the ConnectionPool and the returned Connection objects. Your code should return the new TrackingConnectionPool instead of the original pool (I assume the connection pool is accessed from a single place). This new pool will, in turn, wrap every Connection it provides as a TrackableConnection. The TrackableConnection is the object that knows how to notify our Tracker when created and when closed.
When you call Tracker.stop() at the end of the request, it will report any connections for which close() has not been called yet. Since this is a per-request operation, you will identify only the faulty operations (i.e. during your "Create new product" functionality), and then hopefully you will be able to track down the queries that leave open connections and fix them.
Below you can find code and comments for the TrackingConnectionPool, TrackableConnection, and Tracker classes. The delegate methods have been left out for brevity. I hope this helps.
Note: for the wrappers, use an automated IDE feature (like Eclipse's "Generate delegate methods"); otherwise it will be a time-consuming and error-prone task.
//------------- Pool creation
ConnectionPool original = new ConnectionPool(dbpoolName, ...);
TrackingConnectionPool trackingCP = new TrackingConnectionPool(original);
// ... or without creating the ConnectionPool yourself
TrackingConnectionPool trackingCP = new TrackingConnectionPool(dbpoolName, ...);
// store the reference to trackingCP instead of the original
//------------- TrackingConnectionPool
public class TrackingConnectionPool extends ConnectionPool {

    private ConnectionPool originalPool; // reference to the original pool

    // Wrap all available ConnectionPool constructors like this
    public TrackingConnectionPool(String dbpoolName, ...) {
        originalPool = new ConnectionPool(dbpoolName, ...);
    }

    // ... or use this convenient constructor after you create a pool manually
    public TrackingConnectionPool(ConnectionPool pool) {
        this.originalPool = pool;
    }

    @Override
    public Connection getConnection() throws SQLException {
        Connection con = originalPool.getConnection();
        return new TrackableConnection(con); // wrap the connection with our own wrapper
    }

    @Override
    public Connection getConnection(long timeout) throws SQLException {
        Connection con = originalPool.getConnection(timeout);
        return new TrackableConnection(con); // wrap the connection with our own wrapper
    }

    // for all the rest of ConnectionPool's (and its parent's) public methods, just delegate to the original
    @Override
    public void setCaching(boolean b) {
        originalPool.setCaching(b);
    }
    ...
}
//------------- TrackableConnection
public class TrackableConnection implements Connection, Tracker.Trackable {

    private Connection originalConnection;
    private boolean released = false;

    public TrackableConnection(Connection con) {
        this.originalConnection = con;
        Tracker.resourceAquired(this); // notify the tracker that this resource has been acquired
    }

    // Trackable interface
    @Override
    public boolean isReleased() {
        return this.released;
    }

    // Note: this method is called by the Tracker class (if needed). Do not invoke it manually.
    @Override
    public void release() {
        if (!released) {
            try {
                // attempt to close the connection
                originalConnection.close();
                this.released = true;
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }
    }

    // Connection interface
    @Override
    public void close() throws SQLException {
        originalConnection.close();
        this.released = true;
        Tracker.resourceReleased(this); // notify the tracker that this resource has been released
    }

    // the rest of the methods just delegate to the original connection
    @Override
    public Statement createStatement() throws SQLException {
        return originalConnection.createStatement();
    }
    ....
}
//------------- Tracker
public class Tracker {

    // Create a single object per thread
    private static final ThreadLocal<Tracker> _tracker = new ThreadLocal<Tracker>() {
        @Override
        protected Tracker initialValue() {
            return new Tracker();
        }
    };

    public interface Trackable {
        boolean isReleased();
        void release();
    }

    // Stores all the resources that are used during the thread.
    // When a resource is acquired, a call should be made to resourceAquired().
    // Similarly, when we are done with the resource, a call should be made to resourceReleased().
    private Map<Trackable, Trackable> monitoredResources = new HashMap<Trackable, Trackable>();

    // Call this at the start of each thread. It is important to clear the map
    // because you can't know whether the server reuses this thread.
    public static void start() {
        Tracker monitor = _tracker.get();
        monitor.monitoredResources.clear();
    }

    // Call this at the end of each thread. If all resources have been released,
    // the map should be empty. If it isn't, then someone, somewhere forgot to release a resource.
    // A warning is issued and the resource is released.
    public static void stop() {
        Tracker monitor = _tracker.get();
        if (!monitor.monitoredResources.isEmpty()) {
            // there are resources that have not been released; issue a warning and release each of them
            for (Iterator<Trackable> it = monitor.monitoredResources.keySet().iterator(); it.hasNext();) {
                Trackable resource = it.next();
                if (!resource.isReleased()) {
                    System.out.println("WARNING: resource " + resource + " has not been released. Releasing it now.");
                    resource.release();
                } else {
                    System.out.println("Trackable " + resource
                        + " is released but is still under monitoring. Perhaps you forgot to call resourceReleased()?");
                }
            }
            monitor.monitoredResources.clear();
        }
    }

    // Call this when a new resource is acquired, i.e. you get a connection from the pool
    public static void resourceAquired(Trackable resource) {
        Tracker monitor = _tracker.get();
        monitor.monitoredResources.put(resource, resource);
    }

    // Call this when the resource is released
    public static void resourceReleased(Trackable resource) {
        Tracker monitor = _tracker.get();
        monitor.monitoredResources.remove(resource);
    }
}
You don't have your full code posted so I assume you are not closing your connections. You STILL need to close the connection object obtained from the pool as you would if you were not using a pool. Closing the connection makes it available for the pool to reissue to another caller. If you fail to do this, you will eventually consume all available connections from your pool. A pool's stale connection scavenger is not the best place to clean up your connections. Like your momma told you, put your things away when you are done with them.
Connection conn = null;
try {
    conn = moPool.getConnection(timeout);
    if (conn != null) {
        // do something
    }
} catch (Exception e) {
    // deal with me
} finally {
    try {
        if (conn != null)
            conn.close();
    } catch (Exception e) {
        // maybe deal with me
    }
}
The whole point of connection pooling is to let the pool handle all such things for you.
Having code that closes the open idle connections of a Java pool will not help in your case.
Think of the connection pool as maintaining maps of IDLE and IN-USE connections.
IN-USE: if a connection object is being referenced by the application, it is put into the in-use map by the pool.
IDLE: if a connection object is not being referenced by the application, or has been closed, it is put into the idle map by the pool.
Your pool was exhausted because you were not closing connections. Not closing connections caused all idle connections to end up in the in-use map.
Since the idle map had no entries available, the pool was forced to create more connections.
In this way all your connections got marked as IN-USE.
Your pool does not have any open idle connections which you could close from code.
The pool is not in a position to close any connection even if a timeout occurs, because nothing is idle.
You did your best when you fixed the connection leakage in your code.
You can force the release of the pool and recreate one, but you will have to be careful, because existing in-use connections might be affected in their tasks.
In most connection pools, the idle timeout is the maximum time a connection is idle in the connection pool (waiting to be requested), not how long it may be in use (checked out from the connection pool).
Some connection pools also have timeout settings for how long a connection is allowed to be in use (e.g. DBCP has removeAbandonedTimeout, c3p0 has unreturnedConnectionTimeout), and if those are enabled and the timeout has expired, the connection will be forcefully revoked from the user and either returned to the pool or really closed.
log4jdbc can be used to ease connection-leak troubleshooting by means of its jdbc.connection logger.
This technique doesn't require any modification of the code.