WebSphere 7.0.0.17 error with connection pool - Java

I'm having a problem with an application. I'm using EclipseLink 2.4.1 with Oracle 11g and I'm getting a J2CA0045E error, because the pool never frees any of the connections obtained from the JNDI data source.
Has anyone had this problem? Apparently the connections do not get released and the pool reaches the limit I've configured.
Any ideas? This is my connection pool configuration:
Connection timeout: 180
Maximum connections: 10
Minimum connections: 1
Reap time: 60
Unused timeout: 180
Aged timeout: 120
Purge policy: Entire Pool
EDIT
I have a web service on the same server, and the behavior is the same: using the same JNDI name, the connection count keeps increasing and no connections are released.
initCtx = new InitialContext();
Context envCtx = (Context) initCtx.lookup("java:comp/env");
DataSource ds = (DataSource) envCtx.lookup("jdbc/abc");
conn = ds.getConnection();

Based on the code provided, it looks like you are storing the Connection reference at class scope. You should avoid storing references to Connection objects that could span multiple requests.
Instead, store a reference to the DataSource.
To illustrate, this approach is preferred:
public class GoodService {
    DataSource ds;

    public GoodService() throws NamingException {
        // Only the DataSource reference is stored; this is allowed
        ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/abc");
    }

    public void doSomething() throws Exception {
        // Get the connection within the request.
        Connection conn = ds.getConnection();
        try {
            // do something...
        } finally {
            // Always close the connection in the finally block
            // so it gets returned to the pool no matter what
            conn.close();
        }
    }
}
The following approach is considered bad practice, and is likely to cause the J2CA0045E error you are seeing:
public class BadService {
    Connection conn; // BAD: don't store a Connection across requests!

    public BadService() throws NamingException, SQLException {
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/abc");
        // BAD: don't leave connections open indefinitely like this
        conn = ds.getConnection();
    }

    public void doSomething() throws Exception {
        // do something using conn...
    }
}
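If your runtime supports Java 7 or later (WebSphere 7 typically ships with Java 6, so check first), the same "look up once, connect per request" pattern can be written with try-with-resources so the connection is always returned to the pool. This is a minimal sketch; the class name and the query are illustrative, not part of the original code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class GoodServiceWithResources {
    private final DataSource ds;

    public GoodServiceWithResources() throws NamingException {
        // Cache only the DataSource; connections are obtained per request.
        ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/abc");
    }

    public void doSomething() throws Exception {
        // try-with-resources closes the ResultSet, PreparedStatement and Connection
        // (returning the Connection to the pool) even if an exception is thrown.
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT 1 FROM DUAL"); // illustrative query
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process the row...
            }
        }
    }
}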


Java ComboPooledDataSource exceeds pool size and does not reuse

I have the following code that gets called by another application (which I cannot change) to read from a database.
The method is called in a loop very often and effectively DoSes the DB.
On the DB side I can see that many connections are opened, increasing to several hundred, until the DB crashes under the load.
// Called in a loop
private <T> T execute(String query, PostProcessor<T> postProc, PreProcessor preProc) throws OperationFailedException {
    try (Connection conn
            = Objects.requireNonNull(dataSourceRef.get(), "No Connection").getConnection();
         PreparedStatement smt = conn.prepareStatement(query)) {
        preProc.process(smt);
        return postProc.process(smt.executeQuery());
    } catch (SQLException e) {
        throw new OperationFailedException(e.getMessage(), e);
    }
}
The data source gets initialised beforehand:
// class variable
// AtomicReference<PooledDataSource> dataSourceRef = new AtomicReference<PooledDataSource>();
// Init method
ComboPooledDataSource cpds = new ComboPooledDataSource();
cpds.setDriverClass(config.getConnectionDriver());
cpds.setJdbcUrl(config.getConnectionString());
cpds.setUser(config.getConnectionUser());
cpds.setPassword(config.getConnectionPassword());
// cpds.setMaxPoolSize(10);
dataSourceRef.getAndSet(cpds);
My question is why this is happening.
I thought that, thanks to the pooling, a new connection would not be opened for each query.
Also, setting the max pool size doesn't seem to work.
I also tried a try-catch-finally construct, closing the statement and the connection after use.
(I've read somewhere that finally might be called with a delay under high-load scenarios, so I thought that might be the case.)
But still, why is the pool size exceeded?
How can I limit it, and block the method until a connection is available again before continuing?
When configuring connection pooling with c3p0 you have to consider a few options. They are given below in the application.properties file:
db.driver: oracle.jdbc.driver.OracleDriver // for Oracle
db.username: YOUR_USER_NAME
db.password: YOUR_USER_PASSWORD
db.url: DATABASE_URL
minPoolSize:5 // minimum pool size
maxPoolSize:100 // maximum pool size
maxIdleTime:5 // in seconds; after that time an unused connection is released
maxStatements:1000
maxStatementsPerConnection:100
maxIdleTimeExcessConnections:10000
Here, maxIdleTime is the main point: it defines after how many seconds an unused connection is released.
Another is minPoolSize: it defines how many connections the pool holds when idle.
Another is maxPoolSize: it defines the maximum number of connections the pool will hold under load.
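On the question's last point (blocking until a connection is available): with c3p0, once maxPoolSize is reached, getConnection() waits for a connection to be returned to the pool; checkoutTimeout bounds that wait in milliseconds (0 means wait indefinitely). A minimal sketch, reusing the config and dataSourceRef objects from the question; the 30-second timeout is an arbitrary choice:

ComboPooledDataSource cpds = new ComboPooledDataSource();
cpds.setDriverClass(config.getConnectionDriver());
cpds.setJdbcUrl(config.getConnectionString());
cpds.setUser(config.getConnectionUser());
cpds.setPassword(config.getConnectionPassword());
cpds.setMaxPoolSize(10);        // hard upper bound on pooled connections
cpds.setCheckoutTimeout(30000); // callers wait up to 30 s for a free connection,
                                // then getConnection() throws an SQLException;
                                // leave it at 0 to wait indefinitely
dataSourceRef.getAndSet(cpds);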
Now, how do you configure the ComboPooledDataSource? Here is the code:
@Bean
public ComboPooledDataSource dataSource() {
    ComboPooledDataSource dataSource = new ComboPooledDataSource();
    try {
        dataSource.setDriverClass(env.getProperty("db.driver"));
        dataSource.setJdbcUrl(env.getProperty("db.url"));
        dataSource.setUser(env.getProperty("db.username"));
        dataSource.setPassword(env.getProperty("db.password"));
        dataSource.setMinPoolSize(Integer.parseInt(env.getProperty("minPoolSize")));
        dataSource.setMaxPoolSize(Integer.parseInt(env.getProperty("maxPoolSize")));
        dataSource.setMaxIdleTime(Integer.parseInt(env.getProperty("maxIdleTime")));
        dataSource.setMaxStatements(Integer.parseInt(env.getProperty("maxStatements")));
        dataSource.setMaxStatementsPerConnection(Integer.parseInt(env.getProperty("maxStatementsPerConnection")));
        dataSource.setMaxIdleTimeExcessConnections(10000);
    } catch (PropertyVetoException e) {
        e.printStackTrace();
    }
    return dataSource;
}
For a detailed implementation, please check this thread, where I added a practical implementation.
Edit (Getting connection)
You can get a connection in the following way:
Session session = entityManager.unwrap(Session.class);
session.doWork(connection -> doSomeStuffWith(connection));
How do you get the EntityManager?
@PersistenceContext
private EntityManager entityManager;
Hope this will help you.
Thanks :)

create another connection in transaction mysql

I want to know whether it is possible to create two different connections in one transaction. Suppose I have this code (I do a create operation in table "employe" and I create a log entry in table "log"):
try {
    InitialContext initialContext = new InitialContext();
    DataSource datasource = (DataSource) initialContext.lookup("java:/comp/env/jdbc/postgres");
    Connection connection_db = datasource.getConnection();
    PreparedStatement p1; // prepared statement with the employee parameters
    connection_db.setAutoCommit(false);
    connection_db.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    p1.execute();
    // This creates another connection
    LOGStatic.createLog("CREATE EMPLOYE");
    connection_db.commit();
    connection_db.setAutoCommit(true);
} catch (....) {
    connection_db.rollback();
}
This is the method LOGStatic.createLog("CREATE EMPLOYE"):
public static void creaLogException(String messaggio) {
    Connection connection_db = null;
    InitialContext initialContext;
    try {
        initialContext = new InitialContext();
        // actual JNDI name is "jdbc/postgres"
        DataSource datasource = (DataSource) initialContext.lookup("java:/comp/env/jdbc/postgres");
        connection_db = datasource.getConnection();
        // the code continues with the save operation
    } catch (NamingException n) {
    } catch (SQLException s) {
    }
}
My question is whether this behavior is possible without problems for the commit and rollback operations, or whether it is simply not possible to have another connection.
Can anyone help?
At least in JDBC, the transaction is tightly coupled with the Connection.
In a Java EE server, if you are writing session beans, the transaction will be managed by the server. In that case you could call several methods, and the transaction will follow the method calls.
In plain JDBC the simple solution is not to close the connection, but to pass it around to the different methods as an input parameter.
But be very careful: not closing connections is almost a sure way to end up with an OutOfMemoryError.
(Instead of passing the connection around as an input parameter to various methods, you could use a ThreadLocal. But that is another source of memory leaks if you don't clean up the ThreadLocal variables.)
For more information on how the transaction lifecycle and the Connection are tied together in JDBC, refer to this: How to start a transaction in JDBC?
Note: Even in Java EE a nested transaction is not possible, as nested transactions are not supported by JTA.
Passing the connection around:
public static void createEmployee() throws NamingException, SQLException {
    InitialContext initialContext = new InitialContext();
    DataSource datasource = (DataSource) initialContext.lookup("java:/comp/env/jdbc/postgres");
    Connection connection_db = datasource.getConnection();
    try {
        connection_db.setAutoCommit(false);
        connection_db.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        PreparedStatement p1; // prepared statement with the employee parameters
        p1.execute();
        // This reuses the same connection instead of creating another one
        createLog("CREATE EMPLOYE", connection_db);
        connection_db.commit();
        //connection_db.setAutoCommit(true); // no need
    } catch (....) {
        try { connection_db.rollback(); } catch (Exception e) { /* You can also check some flags to avoid the exception */ }
    } finally {
        try { connection_db.close(); } catch (Exception e) { /* Safe to ignore */ }
    }
}

public static void createLog(String messaggio, Connection connection_db) {
    try {
        // the code continues with the save operation, using the passed-in connection
    } catch (SQLException s) {
    }
}
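If, on the other hand, the goal is for the log row to be committed independently of the employee transaction (for example so it survives a rollback), the log method can obtain its own connection from the same DataSource and commit it on its own. A minimal sketch under that assumption; the table and column names are hypothetical:

public static void createLogIndependently(String messaggio) {
    try {
        InitialContext initialContext = new InitialContext();
        DataSource datasource = (DataSource) initialContext.lookup("java:/comp/env/jdbc/postgres");
        // A second, independent connection: committing here does not depend on
        // the employee transaction, so the log row survives a later rollback.
        try (Connection logConnection = datasource.getConnection();
             PreparedStatement ps = logConnection.prepareStatement(
                     "INSERT INTO log (message) VALUES (?)")) { // hypothetical columns
            ps.setString(1, messaggio);
            ps.executeUpdate(); // auto-commit is on by default for this fresh connection
        }
    } catch (NamingException | SQLException e) {
        // Logging must not break the main operation; at most report the failure.
        e.printStackTrace();
    }
}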

H2 connection pool

I want to make a connection pool for my H2 database, but I think my pool opens a new connection every time I call getConnection(). I expect there to be a fixed number of reusable connections, but if I run this code:
Connection conn = DataSource.getInstance().getConnection();
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM NODE_USERS;");
while (rs.next()) {
    System.out.println(rs.getString("login"));
}
try {
    // wait a bit
    Thread.sleep(20000);
} catch (InterruptedException e) {
    e.printStackTrace();
}
rs.close();
stmt.close();
conn.close();
DataSource:
public class DataSource {
    private static volatile DataSource datasource;
    private BasicDataSource ds;

    private DataSource() throws IOException, SQLException, PropertyVetoException {
        ds = new BasicDataSource();
        ds.setUsername("sa");
        ds.setPassword("sa");
        ds.setUrl("jdbc:h2:tcp://localhost/~/test");
        ds.setMinIdle(5);
        ds.setMaxActive(10);
        ds.setMaxIdle(20);
        ds.setMaxOpenPreparedStatements(180);
    }

    public static DataSource getInstance() throws IOException, SQLException, PropertyVetoException {
        if (datasource == null) {
            synchronized (DataSource.class) {
                if (datasource == null) {
                    datasource = new DataSource();
                }
            }
            datasource = new DataSource();
        }
        return datasource;
    }

    public Connection getConnection() throws SQLException {
        return this.ds.getConnection();
    }
}
and then execute select * from information_schema.sessions;, there are two rows. What is wrong? I also tried the H2 tutorial example, but I got the same result.
You are using a connection pool, namely BasicDataSource. It will create a configured number of connections initially and then, when getConnection() is called, it will either reuse a free pooled connection or create a new one, up to a configured limit (or without a limit if configured so). When the connection obtained is "closed" using Connection.close(), it is actually returned to the pool instead of being closed right away.
You basically have no control over how many open connections there will be at a given time, apart from configuring the minimum and maximum allowed open connections. Observing two open connections therefore doesn't prove anything. If you want to see whether there is a limit on open connections, configure the BasicDataSource to use at most 2 connections using BasicDataSource.setMaxActive(), then try to obtain three connections at the same time. As another test, with the same configuration, try obtaining two connections, returning them using Connection.close(), and then obtaining another two connections.
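A minimal sketch of that test, assuming the DataSource singleton from the question (exception handling around getInstance() is omitted, and the 5-second maxWait is an arbitrary choice):

// In the private DataSource() constructor, cap the pool at two connections:
ds.setMaxActive(2);
ds.setMaxWait(5000); // wait at most 5 s for a free connection, then throw SQLException

// Test code: hold two connections, then try to obtain a third.
Connection c1 = DataSource.getInstance().getConnection();
Connection c2 = DataSource.getInstance().getConnection();
try {
    Connection c3 = DataSource.getInstance().getConnection(); // blocks, then fails: pool exhausted
    c3.close();
} catch (SQLException e) {
    System.out.println("Pool exhausted as expected: " + e.getMessage());
}
c1.close(); // returned to the pool, not physically closed
c2.close();
// Obtaining two connections again now succeeds and reuses the pooled physical connections.
Connection c4 = DataSource.getInstance().getConnection();
Connection c5 = DataSource.getInstance().getConnection();
c4.close();
c5.close();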

Spring JdbcTemplate with DBCP BasicDataSource still closes connections

I'm using the PostgreSQL JDBC driver to connect to Amazon Redshift, together with BasicDataSource from DBCP 2.0.1 and JdbcTemplate from Spring 4. I use DataSourceTransactionManager with Transactional annotations.
It looks like the DataSource keeps creating new connections!
// that is how the dataSource is created
BasicDataSource dataSource = new BasicDataSource() {
    @Override
    public Connection getConnection() throws SQLException {
        Connection c = super.getConnection();
        System.out.println("New connection: " + c);
        return c;
    }
};
dataSource.setUsername(env.getProperty(USERNAME));
dataSource.setPassword(env.getProperty(PASSWORD));
dataSource.setDriverClassName(env.getProperty(DRIVER_CLASS));
dataSource.setUrl(env.getProperty(CONNECTION_URL));
and in the console I see a different Connection object for each operation (they have different hash codes). If I switch to SingleConnectionDataSource everything works as expected, with a single connection object.
Before the call to jdbcTemplate#execute I use TransactionSynchronizationManager.isActualTransactionActive to check that transactions are working (they are)...
What could I be missing then? Why are connections closed? Or what more can I do to investigate the problem? The URL also has the tcpKeepAlive=true parameter...
UPD: thanks to Evgeniy, I've changed the code to see when connections are really created:
BasicDataSource dataSource = new BasicDataSource() {
    @Override
    protected ConnectionFactory createConnectionFactory() throws SQLException {
        final ConnectionFactory cf = super.createConnectionFactory();
        return new ConnectionFactory() {
            public Connection createConnection() throws SQLException {
                Connection c = cf.createConnection();
                System.out.println("New connection from factory: " + c);
                return c;
            }
        };
    }
};
//dataSource.setMaxIdle(0);
Now I can really see that only two connections are created (and if I add setMaxIdle(0) they are instead recreated before each query).
So my suspicion was wrong and the pool works as expected. Thanks a lot!
Different hash codes do not prove that these are different physical connections. Watch the sessions on the database and you will see that calling close() on a connection obtained from BasicDataSource does not close the physical connection.
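As a standalone illustration (not something that needs changing in the poster's setup), DBCP 2 hands out wrapper objects and only recycles the physical connection underneath. Assuming plain DBCP 2 with access to the underlying connection enabled (BasicDataSource and DelegatingConnection from org.apache.commons.dbcp2), the difference can be made visible like this:

BasicDataSource ds = new BasicDataSource();
// ... driver class, URL, user and password as in the question ...
ds.setAccessToUnderlyingConnectionAllowed(true); // required for getInnermostDelegate()

Connection wrapper1 = ds.getConnection();
Connection physical1 = ((DelegatingConnection<?>) wrapper1).getInnermostDelegate();
wrapper1.close(); // returns the connection to the pool instead of closing it

Connection wrapper2 = ds.getConnection();
Connection physical2 = ((DelegatingConnection<?>) wrapper2).getInnermostDelegate();
wrapper2.close();

// Different wrapper objects (hence different hash codes) ...
System.out.println(wrapper1 == wrapper2);   // false
// ... but typically the same physical connection handed out again by the pool.
System.out.println(physical1 == physical2); // usually true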

Creating JDBC transaction for second data source with existing connection pool in Glassfish

I have a main data source for working with the database, which uses a connection pool. Now I want to create a second data source which would use the same connection pool to perform logging operations in a transaction separate from the main database transaction. As far as I understood from the GlassFish docs, if multiple data sources use the same connection pool, they share a transaction until the connection is closed (I might be wrong, please correct me).
So, is there a way to start a new transaction when getting a connection to a data source? By setting the transaction isolation, maybe?
The connection is retrieved in the following way:
private synchronized Connection getConnection() {
    if (connection == null) {
        try {
            final Context ctx = new InitialContext();
            final DataSource ds = (DataSource) ctx.lookup(getDataSourceLookupAddress());
            connection = ds.getConnection();
        } catch (final NamingException e) {
            errorHandler.error("Datasource JNDI lookup failed: " + dataSourceLookupAddress + "!");
            errorHandler.error(e.toString());
        } catch (final SQLException e) {
            errorHandler.error("Sql connection failed to " + dataSourceLookupAddress + "!");
            errorHandler.error(e.toString());
        }
    }
    return connection;
}
I have figured it out. It's necessary to start a new thread; then a new transaction will be created. See also Writing to database log in the middle of rollback.
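A minimal sketch of that idea, reusing errorHandler and getDataSourceLookupAddress() from the snippet above; the log table and column are hypothetical. Work done on a freshly started thread is not enlisted in the caller's JTA transaction, so the log write commits on its own:

private void logInSeparateTransaction(final String message) {
    Thread logWriter = new Thread(new Runnable() {
        public void run() {
            try {
                final Context ctx = new InitialContext();
                final DataSource ds = (DataSource) ctx.lookup(getDataSourceLookupAddress());
                // This connection lives outside the caller's transaction,
                // so its commit is independent of any rollback in the main work.
                Connection c = ds.getConnection();
                try {
                    PreparedStatement ps = c.prepareStatement(
                            "INSERT INTO log_table (message) VALUES (?)"); // hypothetical table
                    ps.setString(1, message);
                    ps.executeUpdate(); // auto-commit applies to this connection only
                    ps.close();
                } finally {
                    c.close();
                }
            } catch (Exception e) {
                errorHandler.error(e.toString());
            }
        }
    });
    logWriter.start();
}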
