ORA-01000: maximum open cursors exceeded when using Spring SimpleJdbcCall

We are using Spring SimpleJdbcCall to call stored procedures in Oracle that return cursors. It looks like SimpleJdbcCall isn't closing the cursors, and after a while the maximum open cursors limit is exceeded.
ORA-01000: maximum open cursors exceeded; nested exception is java.sql.SQLException: ORA-01000: maximum open cursors exceeded
There are a few other people on forums who've experienced this, but seemingly no answers. It looks to me like a bug in the Spring/Oracle support.
This bug is critical and could impact our future use of Spring JDBC.
Has anybody come across a fix - either tracking the problem to the Spring code or found a workaround that avoids the problem?
We are using Spring 2.5.6.
Here is the new version of the code, using SimpleJdbcCall, which appears not to be closing the result set that the proc returns via a cursor:
...
SimpleJdbcCall call = new SimpleJdbcCall(dataSource);
Map params = new HashMap();
params.put("remote_user", session.getAttribute("cas_username"));
Map result = call
    .withSchemaName("urs")
    .withCatalogName("ursWeb")
    .withProcedureName("get_roles")
    .returningResultSet("rolesCur", new au.edu.une.common.util.ParameterizedMapRowMapper())
    .execute(params);
List roles = (List) result.get("rolesCur");
The older version of the code which doesn't use Spring JDBC doesn't have this problem:
oracleConnection = dataSource.getConnection();
callable = oracleConnection.prepareCall("{ call urs.ursweb.get_roles(?, ?) }");
callable.setString(1, (String) session.getAttribute("cas_username"));
callable.registerOutParameter(2, oracle.jdbc.OracleTypes.CURSOR);
callable.execute();
ResultSet rset = (ResultSet) callable.getObject(2);
// ... do stuff with the result set
if (rset != null) rset.close();                         // Explicitly close the result set
if (callable != null) callable.close();                 // Close the callable
if (oracleConnection != null) oracleConnection.close(); // Close the connection
It would appear that Spring JDBC is NOT calling rset.close(). If I comment out that line in the old code then after load testing we get the same database exception.

After much testing we have fixed this problem. It was a combination of how we were using the Spring framework, the Oracle JDBC client, and the Oracle DB. We were creating new SimpleJdbcCall objects, and each one used the Oracle JDBC client's metadata calls; those calls return cursors that were never closed or cleaned up. I consider this a bug in the Spring JDBC framework: it fetches the metadata but then does not close the cursor. Spring should copy the metadata out of the cursor and close it properly. I haven't bothered opening a JIRA issue with Spring because if you follow best practice the bug isn't exhibited.
Tweaking OPEN_CURSORS or any of the other parameters is the wrong way to fix this problem and just delays it from appearing.
We worked around it/fixed it by moving the SimpleJdbcCall into a singleton DAO, so there is only one cursor open for each Oracle proc that we call. These cursors stay open for the lifetime of the app, which I also consider a bug. As long as OPEN_CURSORS is larger than the number of SimpleJdbcCall objects, there won't be hassles.
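For illustration, here is a minimal sketch of that workaround, assuming a Spring-managed singleton DAO (the class name and method are made up for this example; the proc details and row mapper are the ones from the question):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;

public class RoleDao {

    private final SimpleJdbcCall getRolesCall;

    public RoleDao(DataSource dataSource) {
        // Built once for the lifetime of the singleton, so the metadata lookup
        // (and the cursor it holds) happens only once per proc.
        this.getRolesCall = new SimpleJdbcCall(dataSource)
            .withSchemaName("urs")
            .withCatalogName("ursWeb")
            .withProcedureName("get_roles")
            .returningResultSet("rolesCur", new au.edu.une.common.util.ParameterizedMapRowMapper());
    }

    public List getRoles(String remoteUser) {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("remote_user", remoteUser);
        Map<String, Object> result = getRolesCall.execute(params);
        return (List) result.get("rolesCur");
    }
}

Wire it as a singleton bean and inject it wherever the proc is needed, so the number of SimpleJdbcCall instances (and hence long-lived cursors) stays fixed.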

Well, I got this problem when I was reading BLOBs. The main cause was that I was also updating the table, and the Statement containing the update clause was not being closed automatically. That nasty cursor leak ate all the free cursors. After an explicit call to statement.close() the error disappeared.
Moral: always close everything explicitly; don't rely on the Statement being closed automatically when it is discarded.
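A minimal sketch of that pattern with plain JDBC (try/finally; on Java 7+ try-with-resources does the same thing more concisely; the query text is just a placeholder):

Connection conn = null;
PreparedStatement stmt = null;
ResultSet rs = null;
try {
    conn = dataSource.getConnection();
    stmt = conn.prepareStatement("SELECT ..."); // your statement here
    rs = stmt.executeQuery();
    // ... read the results ...
} finally {
    // Close in reverse order of acquisition; each close gets its own try
    // so one failure doesn't leak the remaining resources.
    if (rs != null)   try { rs.close(); }   catch (SQLException ignore) {}
    if (stmt != null) try { stmt.close(); } catch (SQLException ignore) {}
    if (conn != null) try { conn.close(); } catch (SQLException ignore) {}
}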

Just be careful about setting OPEN_CURSORS to higher and higher values: there are overheads, and it can just band-aid over an actual problem/error in your code.
I don't have experience with the Spring side of this, but I worked on an app where we had many issues with ORA-01000 errors, and constantly adjusting OPEN_CURSORS just made the problem go away for a little while ...

I can promise you that it's not Spring. I worked on a Spring 1.x app that went live in 2005 and hasn't leaked a connection since (WebLogic 9, JDK 5). You aren't closing your resources properly.
Are you using a connection pool? Which app server are you deploying to? Which version of Spring? Oracle? Java? Details, please.

Oracle's OPEN_CURSORS is the key, all right. We have a small 24x7 app running against Oracle XE with only a few apparently open cursors. We had intermittent max open cursors errors until we set the OPEN_CURSORS initialization value to > 300.

The solution is not in Spring, but in Oracle: you need to set the OPEN_CURSORS initialization parameter to some value higher than the default 50.
Oracle -- at least as of 8i, perhaps it's changed -- would reparse JDBC PreparedStatement objects unless you left them open. This was expensive, and most people ended up maintaining a fixed pool of open statements that were resubmitted.
(Taking a quick look at the 10g docs, they explicitly note that the OCI driver will cache PreparedStatements, so I'm assuming that the native driver still recreates them each time.)
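If reparsing is the worry, the Oracle JDBC driver also offers implicit statement caching; here is a hedged sketch (the URL and credentials are placeholders, and the caching methods come from the oracle.jdbc classes, so check them against your driver version):

// Enable implicit statement caching so identical PreparedStatements are
// reused from a per-connection cache instead of being recreated/reparsed.
oracle.jdbc.pool.OracleDataSource ods = new oracle.jdbc.pool.OracleDataSource();
ods.setURL("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder URL
ods.setUser("app_user");                            // placeholder credentials
ods.setPassword("secret");
ods.setImplicitCachingEnabled(true);

Connection conn = ods.getConnection();
((oracle.jdbc.OracleConnection) conn).setStatementCacheSize(50);
// PreparedStatement.close() now returns the statement to the cache
// rather than discarding it.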

Related

Auto attached records get detached after commit

I want to fetch a list of Relation records, which I do with .fetchInto(RELATION), and then iterate over the list and commit each iteration to the database. This doesn't seem to be working for me, because I get Caused by: java.sql.SQLException: Connection is closed while updating the record. I don't have this problem with regular jOOQ queries, and it doesn't look like any connections are actually being closed.
When I use contact.attach(jooq().configuration()) it seems to work again. How can I prevent the records from being detached?
I start and commit the transaction through JPA.em().getTransaction().*.
org.jooq.exception.DataAccessException: SQL [update `Relation` set `Relation`.`organizationName` = ? where `Relation`.`id` = ?]; Connection is closed
at org.jooq_3.15.1.MYSQL.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2979)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:643)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349)
at org.jooq.impl.UpdatableRecordImpl.storeMergeOrUpdate0(UpdatableRecordImpl.java:331)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:228)
at org.jooq.impl.UpdatableRecordImpl.lambda$storeUpdate$1(UpdatableRecordImpl.java:220)
at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:143)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:219)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:156)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:151)
at worker.taskprocessors.BulkContactEditWorker.execute(BulkContactEditWorker.java:144)
Example:
var contacts = jooq()
    .selectFrom(
        RELATION
            .join(BULK_CONTACT_EDIT_CONTACTS)
            .on(
                BULK_CONTACT_EDIT_CONTACTS
                    .CONTACT_ID
                    .eq(RELATION.ID)
                    .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
                    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

for (RelationRecord contact : contacts) {
    contact.attach(jooq().configuration()); // I have to add this line to make it work.
    contact.setOrganizationName("OrganizationName");
    contact.update();
    JPA.em().getTransaction().commit();
    if (!JPA.em().getTransaction().isActive()) {
        JPA.em().getTransaction().begin();
    }
}
What's the problem
You're fetching your jOOQ RelationRecord values outside of a JPA transactional context, so that fetching runs in its own transaction (this is independent of jOOQ). jOOQ will always try to acquire() and release() a connection for every query, and when that happens outside of a transactional context, then the connection will be effectively closed (e.g. returned to the connection pool). You should switch the order of starting a transaction and running the jOOQ query:
// start transaction
// run jOOQ query
// iterate jOOQ results
// run updates
// commit transaction
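A minimal sketch of that ordering, reusing the helpers and generated tables from your example (jooq(), JPA.em(), RELATION, etc. are your own wiring):

// Begin the JPA transaction first, so the jOOQ fetch and the record updates
// all run on the same transactional connection.
if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

var contacts = jooq()
    .selectFrom(RELATION
        .join(BULK_CONTACT_EDIT_CONTACTS)
        .on(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID.eq(RELATION.ID)
            .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
            .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

for (RelationRecord contact : contacts) {
    contact.setOrganizationName("OrganizationName");
    contact.update(); // no explicit attach() needed now
}

JPA.em().getTransaction().commit();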
If you're using declarative transactions (e.g. Spring's @Transactional annotation), then you're less likely to run into this problem, as your initial jOOQ query will more likely already be in the same transaction.
Why did attaching work
When you explicitly attach the RelationRecord to the configuration, you take a record that was attached to the previously closed connection and re-attach it to the new configuration, which carries the transactional connection. I'm assuming that your jooq() method produces a new DSLContext instance, which wraps the currently active DataSource or Connection.
Switch to bulk updates
However, if your example is all there is to your actual logic (i.e. you haven't simplified it for this Stack Overflow question), then why not just run a bulk update? It will be simpler and much faster.
jooq()
    .update(RELATION)
    .set(RELATION.ORGANIZATION_NAME, "OrganizationName")
    .where(RELATION.ID.in(
        select(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID)
        .from(BULK_CONTACT_EDIT_CONTACTS)
        .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
        .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    ))
    // Since you're using MySQL, you have native UPDATE .. LIMIT support
    .limit(batchSize)
    .execute();
I'm assuming your actual example is a bit more complex, because you need to set the PROCESSED flag to true somewhere, too, but it's always good to keep this option in mind.
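For example, the PROCESSED flag could be flipped by a second bulk statement in the same transaction; this is a sketch only, to be adapted to however you actually delimit a batch:

// Mark the contacts of this bulk edit as processed, mirroring the selection
// used by the update above. Adjust the predicate (and any LIMIT) to match
// your real batching rules.
jooq()
    .update(BULK_CONTACT_EDIT_CONTACTS)
    .set(BULK_CONTACT_EDIT_CONTACTS.PROCESSED, true)
    .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    .execute();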

JdbcTemplate pooling and performance

I am working on a project where we are optimizing a legacy codebase that used boilerplate JDBC code; in the new project we use Spring's JdbcTemplate. I've found the query times in the legacy code are twice as fast, and I'm curious whether it's JdbcTemplate that's at fault or something else...
We use Apache Commons BasicDataSource (to provide the pooling). My question is, I'm not too sure whether the pooling is actually working correctly. Below are my configurations...
[data source configuration - image in original post]
[Wiring of the datasource - image in original post]
To analyze this, I start the application, wire up all the JDBC stuff, and simply run the same query 100 times. I am also using log4j to get some metrics around the actual performance: one line will print the actual JDBC call time, and I have an additional wrapper around the entire JdbcTemplate call to see how long the whole thing takes (shown below)...
Edit: Adding image of RowMapper
Below shows what my logs look like...
DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL query
DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL statement [my select query]
DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
DEBUG com.custom.frameworkx.spring.datasource.DebugDataSource - before executeQuery() sql=my select query
DEBUG com.custom.frameworkx.spring.datasource.DebugDataSource - after executeQuery() [time=30ms] sql=my select query
DEBUG com.custom.frameworkx.spring.datasource.DebugDataSource - DebugResultSet.close()
DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
Full jdbcTemplate call time: 119
Notice how the actual query takes 30 milliseconds, yet the entire JdbcTemplate call takes 119 milliseconds. I assume this is JdbcTemplate overhead, like acquiring and releasing resources, so I guess it might be acceptable, but in the legacy version of this code the entire connection creation + query + resource releasing is still twice as fast.
While debugging through org.springframework.jdbc.datasource.DataSourceUtils.java, I can see the code goes to line 328, which to me doesn't really look like it's returning the connection to the pool for re-use later. Is this the expected behavior of Spring's JdbcTemplate? To me it looks like, if pooling were working, it should hit line 323.
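One cheap way to check whether the commons-dbcp pool is actually reusing connections is to log its live counters around the test loop; a sketch, assuming the BasicDataSource bean, the jdbcTemplate, and a log4j logger (log) are all accessible where the test runs:

// BasicDataSource exposes live pool counters; if pooling works, the idle count
// should settle at a small stable number and active should drop back to 0 after
// each call, rather than connections being created and destroyed every time.
org.apache.commons.dbcp.BasicDataSource bds =
        (org.apache.commons.dbcp.BasicDataSource) dataSource;

for (int i = 0; i < 100; i++) {
    jdbcTemplate.queryForList("my select query"); // same query as in the logs above
    log.debug("pool active=" + bds.getNumActive() + ", idle=" + bds.getNumIdle());
}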

H2 - Tomcat jdbc connection pool not reclaiming connections once it hits the max limit

Problem Statement
We have been using H2 in embedded mode for a while now. It has a connection pool configured on top of it. The current pool configuration is:
h2.datasource.min-idle=10
h2.datasource.initial-size=10
h2.datasource.max-active=200
h2.datasource.max-age=600000
h2.datasource.max-wait=3000
h2.datasource.min-evictable-idle-time-millis=60000
h2.datasource.remove-abandoned=true
h2.datasource.remove-abandoned-timeout=60
h2.datasource.log-abandoned=true
h2.datasource.abandonWhenPercentageFull=100
H2 config:
spring.h2.console.enabled=true
spring.h2.console.path=/h2
h2.datasource.url=jdbc:h2:file:~/h2/cartdb
h2.server.properties=webAllowOthers
spring.h2.console.settings.web-allow-others=true
h2.datasource.driver-class-name=org.h2.Driver
*skipping username and password properties.
We have verified that the above configuration takes effect by logging the pool properties.
The issue with this setup is that we are observing regular (though intermittent) connection pool exhaustion, and once the pool hits the max limit it starts throwing the following exception for some queries:
SqlExceptionHelper.logExceptions(SqlExceptionHelper.java:129) - [http-apr-8080-exec-38] Timeout: Pool empty. Unable to fetch a connection in 3 seconds, none available[size:200; busy:200; idle:0; lastwait:3000].
And thereafter it fails to recover from this state, even after many hours, until we restart the web server (Tomcat in this case).
H2 driver dependency:
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>1.4.196</version>
<scope>runtime</scope>
</dependency>
Query Pattern & Throughput
We use H2 to load up some data for every request, then execute a few (about 50) SELECT queries, and finally delete the data. This results in a consistent 30k-40k calls per minute (except off hours) on H2, according to New Relic monitoring.
Every read operation acquires a new connection and releases it after execution:
EntityManager entityManager = null;
try {
    entityManager = entityManagerFactory.createEntityManager();
    Query query = entityManager.createNativeQuery(sqlQuery);
    query.setParameter("cartId", cartId);
    List<String> resultList = query.getResultList();
    return resultList;
} finally {
    if (null != entityManager) { entityManager.close(); }
}
Observations
After an application restart the pool utilization is minimal, until at some moment it abruptly shoots up and eventually reaches the max limit. This happens over the course of 1-2 days.
Once the pool hits the maximum connection limit, the borrowed connection count increases at a faster pace than the returned connection count, whereas normally the two stay very close to one another.
At the same time the abandoned connection count also starts increasing, along with the abandon logs.
Interestingly, the query response times remain the same after pool exhaustion, so this pretty much rules out slow queries.
This issue has happened even at the oddest hours, when traffic is at its minimum, so it has no relation to the traffic.
Please guide us in the right direction to solve this issue.
UPDATE
Recently we discovered the following causes in our stack trace when one such incident occurred:
Caused by: org.h2.jdbc.JdbcSQLException: Database may be already in use: null. Possible solutions: close all other connection(s); use the server mode [90020-196]
Caused by: java.lang.IllegalStateException: The file is locked: nio:/root/h2/cartdb.mv.db [1.4.196/7]
Caused by: java.nio.channels.OverlappingFileLockException
So after digging into this we have decided to move to in-memory mode, as we don't need to persist the data beyond the application's lifetime. As a result, the file lock should not occur, thereby reducing or eliminating this issue.
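For reference, the only property we expect to change for that is the JDBC URL (a sketch using the same property name as above; DB_CLOSE_DELAY=-1 keeps the in-memory database alive for the lifetime of the JVM instead of dropping it when the last connection closes):
h2.datasource.url=jdbc:h2:mem:cartdb;DB_CLOSE_DELAY=-1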
Will come back and update in either case.
Since the last update on the question:
After observing the performance for quite some time, we have come to the conclusion that using H2 in file mode (embedded) was somehow leading to file lock exceptions periodically (though irregularly).
Since our application does not need to persist the data beyond the application's lifetime, we decided to move to pure in-memory mode.
The mystery of the file lock exceptions still needs to be unravelled, though.

Unable to Isolate Transactions Across Tiers in Postgres / JDBC

I'm working on a Java project that incorporates a PostgreSQL 9.0 database tier, accessed using JDBC. The SQL is wrapped in functions, executed from Java like stored procedures using JDBC.
The database requires a header-detail scheme, with detail records requiring the foreign-key ID to the header. Thus, the header row is written first, then a couple thousand detail records. I need to prevent the user from accessing the header until the details have completed writing.
You may suggest wrapping the entire transaction so that the header record cannot be committed until the detail records have finished writing. However, as you can see below, I've isolated the work into separate calls in Java: write the header, then loop through the details (writing detail rows). Due to the sheer size of the data, it is not feasible to pass all the detail data to the function to perform one transaction.
My question is: how do I wrap the transaction at the JDBC level, so that the header is not committed until the detail records have finished writing?
The best solution metaphor would be SQL Server's named transactions, where the transaction can be started in the data-access layer code (outside other transactions) and completed in a later DB call.
The following (simplified) code executes without error, but doesn't resolve the isolation problem:
DatabaseManager mgr = DatabaseManager.getInstance();
Connection conn = mgr.getConnection();
CallableStatement proc = null;
conn.setAutoCommit(false);
proc = conn.prepareCall("BEGIN TRANSACTION");
proc.execute();
//Write header details
writeHeader(....);
for(Fault fault : faultList) {
writeFault(fault, buno, rsmTime, dnld, faultType, verbose);
}
proc = conn.prepareCall("COMMIT TRANSACTION");
proc.execute();
Your brilliant answer will be much appreciated!
Are you using the same connection for writeHeader and writeFault?
conn.setAutoCommit(false);

headerProc = conn.prepareCall("headerProc...");
headerProc.setString(...);
headerProc.execute();

detailProc = conn.prepareCall("detailProc...");
for (Fault fault : faultList) {
    detailProc.setString(...);
    detailProc.execute();
    detailProc.clearParameters();
}

conn.commit();
And then you should really look at "addBatch" for that detail loop.
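A sketch of what that batching could look like for the detail loop (same connection, autoCommit still off; the SQL, column names, and Fault getters are illustrative, not from your code):

// Batch the detail writes and send them in one round trip per batch,
// then commit header and details together.
PreparedStatement detail = conn.prepareStatement(
        "INSERT INTO fault_detail (header_id, fault_code) VALUES (?, ?)"); // illustrative SQL
for (Fault fault : faultList) {
    detail.setLong(1, headerId);          // illustrative header FK
    detail.setString(2, fault.getCode()); // illustrative getter
    detail.addBatch();
}
detail.executeBatch();
conn.commit();
detail.close();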
While it seems you've solved your immediate issue, you may want to look into JTA if you're running inside a Java EE container. JTA combined with EJB 3.1* lets you do declarative transaction control and greatly simplifies transaction management, in my experience.
*Don't worry, EJB 3.1 is much simpler, cleaner, and less horrid than prior EJB specs.
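A minimal sketch of what that looks like with container-managed transactions (the bean name and JNDI name are made up; writeHeader/writeFault stand in for your existing DB calls):

// With container-managed transactions the whole method runs in one JTA
// transaction: it commits when the method returns normally and rolls back
// on a system exception, with no explicit BEGIN/COMMIT in the code.
@javax.ejb.Stateless
public class FaultWriterBean {

    @javax.annotation.Resource(lookup = "jdbc/appDataSource") // illustrative JNDI name
    private javax.sql.DataSource dataSource;

    @javax.ejb.TransactionAttribute(javax.ejb.TransactionAttributeType.REQUIRED) // the default, shown for clarity
    public void writeHeaderAndFaults(Header header, java.util.List<Fault> faultList) {
        writeHeader(header);
        for (Fault fault : faultList) {
            writeFault(fault);
        }
    }

    private void writeHeader(Header header) { /* existing JDBC call via dataSource */ }

    private void writeFault(Fault fault) { /* existing JDBC call via dataSource */ }

    // Note: under JTA the code must not call setAutoCommit/commit on the
    // connection itself; the container manages that.
}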

spring ibatis mysql intermittent asynchronous problem

I'm using iBATIS in Spring to write to MySQL.
I have an intermittent bug. On each cycle of a process I write two rows to the DB. On the next cycle I read back the rows from the previous cycle. Sometimes (one time in 30, sometimes more frequently, sometimes less) I only get back one row from the DB.
I have turned off all caching I can think of. My sqlmap-config.xml just says:
<sqlMapConfig>
    <settings enhancementEnabled="false" statementCachingEnabled="false" classInfoCacheEnabled="false"/>
    <sqlMap resource="ibatis/model/cognitura_core.xml"/>
</sqlMapConfig>
Is there some asynchrony, or some caching in Spring, iBATIS, or the MySQL driver, that I'm missing?
Using spring 3.0.5, mybatis 2.3.5, mysql-connector-java 5.0.5
EDIT 1:
Could it be because I'm using a pool of connections (c3p0)? Is it possible the insert is still running when I'm reading? It's weird, though; I thought everything would be occurring synchronously unless I explicitly declared it asynchronous.
Are you calling SqlSession.commit() after the inserts? C3P0 asynchronously "closes" the connections, which may be calling commit under the covers. That could explain the behavior you are seeing.
I'm getting similar behavior. This is what I'm doing (I have an old version of iBATIS that I don't plan on upgrading); you can easily move this into a decorator:
SqlMapSession session = client.openSession();
try {
    try {
        session.startTransaction();
        // do work
        session.commitTransaction();
        // The transaction should be committed now, but it doesn't always happen.
        session.getCurrentConnection().commit(); // Commit again :/
    } finally {
        session.endTransaction();
    }
} finally {
    session.close(); // would be nice if it was 'AutoCloseable'
}
