spring ibatis mysql intermittent asynchronous problem - java

I'm using ibatis in spring to write to mysql.
I have an intermittent bug. On each cycle of a process I write two rows to the db. The next cycle I read in the rows from the previous cycle. Sometimes (one time in 30, sometimes more frequently, sometimes less) I only get back one row from the db.
I have turned off all caching I can think of. My sqlmap-config.xml just says:
<sqlMapConfig>
    <settings enhancementEnabled="false" statementCachingEnabled="false" classInfoCacheEnabled="false"/>
    <sqlMap resource="ibatis/model/cognitura_core.xml"/>
</sqlMapConfig>
Is there some asynchrony, or else caching to spring or ibatis or the mysql driver that I'm missing?
Using Spring 3.0.5, iBATIS 2.3.5, mysql-connector-java 5.0.5.
EDIT 1:
Could it be because I'm using a pool of connections (c3p0)? Is it possible the insert is still running when I'm reading? It's weird, though; I thought everything would be occurring synchronously unless I explicitly declared it asynchronous.

Are you calling SqlSession.commit() after the inserts? C3P0 asynchronously "closes" the connections, which may be calling commit under the covers. That could explain the behavior you are seeing.
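If the writes aren't already wrapped in an explicit transaction, here is a minimal sketch of forcing the commit before the next cycle's read, assuming a Spring DataSourceTransactionManager and SqlMapClientTemplate configured on the same c3p0 DataSource (the statement ids and variables are placeholders):

TransactionTemplate txTemplate = new TransactionTemplate(transactionManager);
txTemplate.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // Both inserts are committed together when this callback returns,
        // before the connection goes back to the c3p0 pool.
        sqlMapClientTemplate.insert("insertRowOne", rowOne);
        sqlMapClientTemplate.insert("insertRowTwo", rowTwo);
    }
});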

I'm getting similar behavior. This is what I'm doing. I have an old version of IBATIS I don't plan on upgrading. You can easily move this into a decorator.
SqlMapSession session = client.openSession();
try {
    try {
        session.startTransaction();
        // do work
        session.commitTransaction();
        // The transaction should be committed now, but it doesn't always happen.
        session.getCurrentConnection().commit(); // Commit again :/
    } finally {
        session.endTransaction();
    }
} finally {
    session.close(); // would be nice if it was 'AutoCloseable'
}

Related

Auto attached records get detached after commit

I want to fetch a list of Relation records using .fetchInto(RELATION), then iterate over the list and commit each iteration to the database. This doesn't seem to work for me: I get Caused by: java.sql.SQLException: Connection is closed while updating the record. I don't have this problem with regular jOOQ queries, and it doesn't look like any connections are actually closed.
When I use contact.attach(jooq().configuration()) it works again. How can I prevent the records from detaching in the first place?
I start and commit a transaction through JPA.em().getTransaction().*.
org.jooq.exception.DataAccessException: SQL [update `Relation` set `Relation`.`organizationName` = ? where `Relation`.`id` = ?]; Connection is closed
at org.jooq_3.15.1.MYSQL.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2979)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:643)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349)
at org.jooq.impl.UpdatableRecordImpl.storeMergeOrUpdate0(UpdatableRecordImpl.java:331)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:228)
at org.jooq.impl.UpdatableRecordImpl.lambda$storeUpdate$1(UpdatableRecordImpl.java:220)
at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:143)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:219)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:156)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:151)
at worker.taskprocessors.BulkContactEditWorker.execute(BulkContactEditWorker.java:144)
Example:
var contacts = jooq()
    .selectFrom(
        RELATION
            .join(BULK_CONTACT_EDIT_CONTACTS)
            .on(
                BULK_CONTACT_EDIT_CONTACTS
                    .CONTACT_ID
                    .eq(RELATION.ID)
                    .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
                    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

for (RelationRecord contact : contacts) {
    contact.attach(jooq().configuration()); // I have to add this line to make it work.
    contact.setOrganizationName("OrganizationName");
    contact.update();

    JPA.em().getTransaction().commit();
    if (!JPA.em().getTransaction().isActive()) {
        JPA.em().getTransaction().begin();
    }
}
What's the problem
You're fetching your jOOQ RelationRecord values outside of a JPA transactional context, so that fetching runs in its own transaction (this is independent of jOOQ). jOOQ will always try to acquire() and release() a connection for every query, and when that happens outside of a transactional context, then the connection will be effectively closed (e.g. returned to the connection pool). You should switch the order of starting a transaction and running the jOOQ query:
// start transaction
// run jOOQ query
// iterate jOOQ results
// run updates
// commit transaction
If you're using declarative transactions (e.g. Spring's @Transactional annotation), then you're less likely to run into this problem, as your initial jOOQ query will more likely already be running in the same transaction.
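For instance, reusing the names from the question, the reordered flow might look roughly like this (a sketch, not a drop-in fix):

// 1. Start the transaction first
if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

// 2. Fetch inside that transaction, so the records stay attached to a live connection
var contacts = jooq()
    .selectFrom(RELATION.join(BULK_CONTACT_EDIT_CONTACTS)
        .on(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID.eq(RELATION.ID)
            .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
            .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

// 3. Run the updates; no explicit attach() needed
for (RelationRecord contact : contacts) {
    contact.setOrganizationName("OrganizationName");
    contact.update();
}

// 4. Commit once at the end
JPA.em().getTransaction().commit();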
Why did attaching work
When you explicitly attach the RelationRecord to the configuration, you re-attach a record that was bound to the previously closed connection to the new configuration, which wraps the transactional connection. I'm assuming that your jooq() method produces a new DSLContext instance, which wraps the currently active DataSource or Connection.
Switch to bulk updates
However, if your example is all there is to your actual logic (i.e. you haven't simplified it for this Stack Overflow question), then why not just run a bulk update? It will be simpler and much faster.
jooq()
    .update(RELATION)
    .set(RELATION.ORGANIZATION_NAME, "OrganizationName")
    .where(RELATION.ID.in(
        select(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID)
        .from(BULK_CONTACT_EDIT_CONTACTS)
        .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
        .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    ))
    // Since you're using MySQL, you have native UPDATE .. LIMIT support
    .limit(batchSize)
    .execute();
I'm assuming your actual example is a bit more complex, because you need to set the PROCESSED flag to true somewhere, too, but it's always good to keep this option in mind.

How to find the status of records loaded when forcefully interrupting batch execution by stopping the MSSQL database

We are implementing connection/flush retry logic for the database, with auto-commit=true:
RetryPolicy retryPolicy = new RetryPolicy()
    .retryOn(DataAccessException.class)
    .withMaxRetries(maxRetry)
    .withDelay(retryInterval, TimeUnit.SECONDS);

result = Failsafe.with(retryPolicy)
    .onFailure(throwable -> LOG.warn("Flush failure, will not retry. {} {}",
        throwable.getClass().getName(), throwable.getMessage()))
    .onRetry(throwable -> LOG.warn("Flush failure, will retry. {} {}",
        throwable.getClass().getName(), throwable.getMessage()))
    .get(cntx -> batch.execute());
We want to interrupt the storing, updating, inserting, and deleting of records by stopping the MSSQL DB service in the backend. At some point, even though we get an org.jooq.exception.DataAccessException, some of the records in the batch (a subset of the batch) have already been loaded into the DB.
Is there any way to find the failed and the successfully loaded records using the jOOQ API?
The jOOQ API cannot help you here out of the box, because such functionality is definitely out of scope for the relatively low-level jOOQ API, which helps you write type-safe embedded SQL. It does not make any assumptions about your business logic or infrastructure logic.
Ideally, you will run your own diagnostic here. For example, you already have a BATCHID column which should make it possible to detect which records were inserted/updated with which process. When you re-run the batch, you need to detect that you've already attempted this batch, remember the previous BATCHID, and fetch the IDs of the previous attempt to do whatever needs to be done prior to a re-run.
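For illustration only, such a diagnostic could look roughly like this in jOOQ, where ctx is your DSLContext and TARGET_TABLE, ID, and BATCHID stand in for your actual generated schema:

// Which rows already carry the previous attempt's batch id?
Result<Record1<Long>> alreadyLoaded = ctx
    .select(TARGET_TABLE.ID)
    .from(TARGET_TABLE)
    .where(TARGET_TABLE.BATCHID.eq(previousBatchId))
    .fetch();
// Anything in this result was loaded by the failed run and can be skipped
// or cleaned up before re-running the batch.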

Vertx PostgreSQL Transactions

I'm trying to make a simple SQL Transaction, but unfortunately I can't get it to work right.
What I'm doing right now:
protected Single<SQLConnection> tx() {
    return PostgreSQLClient.createShared(getVertx(), SqlUtil.getConnectionData())
        .rxGetConnection().map((SQLConnection conn) -> {
            conn.rxSetAutoCommit(false);
            return conn;
        });
}
This should be enough from what I understand from reading the docs?
when I inspect conn I see:
inTransaction = false
isAutoCommit = true
Why is that and what am I doing wrong here?
--
I use the common sql driver (http://vertx.io/docs/vertx-sql-common) with vertx 3.4.1
What you're seeing is the internal state of the connection. The current implementation controls transactionality using two flags:
inTransaction
isAutoCommit
The last one is flipped once you call the method:
conn.rxSetAutoCommit(false);
But this is handled internally as a no-op. Only when a subsequent call is performed will the transaction actually be started and the first flag change.
Keep in mind that this is internal state of the client and can/will change in the future when proper transaction isolation levels are implemented in the async driver for which there is already a pending pull request.
If you want to see it working, basically issue a SQL statement in your code, e.g.:
conn.rxSetAutoCommit(false).rxExecute("SELECT ...")
and if you inspect again you will see that both flags are now true, and that there is a running transaction on your server.
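For example, a rough sketch of chaining those calls with the rx-ified API in Vert.x 3.4, so that the whole sequence runs when the Single is subscribed ("SELECT 1" is just a placeholder statement):

protected Single<SQLConnection> tx() {
    return PostgreSQLClient.createShared(getVertx(), SqlUtil.getConnectionData())
        .rxGetConnection()
        .flatMap(conn -> conn
            .rxSetAutoCommit(false)                          // runs only as part of the subscribed chain
            .flatMap(ignored -> conn.rxExecute("SELECT 1"))  // the first statement actually opens the transaction
            .map(ignored -> conn));
}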

Unable to Isolate Transactions Across Tiers in Postgres / JDBC

I'm working on a Java project that incorporates a PostgreSQL 9.0 database tier, using JDBC. SQL is wrapped in functions and executed from Java like stored procedures using JDBC.
The database requires a header-detail scheme, with detail records requiring the foreign-key ID to the header. Thus, the header row is written first, then a couple thousand detail records. I need to prevent the user from accessing the header until the details have completed writing.
You may suggest wrapping the entire operation in a transaction so that the header record cannot be committed until the detail records have finished writing. However, you can see below that I've isolated the transactions to calls in Java: write the header, then loop through the details (writing detail rows). Due to the sheer size of the data, it is not feasible to pass all the detail data to the function to perform one transaction.
My question is: how do I wrap the transaction at the JDBC level, so that the header is not committed until the detail records have finished writing?
The best solution metaphor would be SQL Server's named transactions, where the transaction can be started in the data-access-layer code (outside other transactions) and completed in a later DB call.
The following (simplified) code executes without error, but doesn't resolve the isolation problem:
DatabaseManager mgr = DatabaseManager.getInstance();
Connection conn = mgr.getConnection();
CallableStatement proc = null;

conn.setAutoCommit(false);

proc = conn.prepareCall("BEGIN TRANSACTION");
proc.execute();

// Write header details
writeHeader(....);

for (Fault fault : faultList) {
    writeFault(fault, buno, rsmTime, dnld, faultType, verbose);
}

proc = conn.prepareCall("COMMIT TRANSACTION");
proc.execute();
Your brilliant answer will be much appreciated!
Are you using the same connection for writeHeader and writeFault?
conn.setAutoCommit(false);

headerProc = conn.prepareCall("headerProc...");
headerProc.setString(...);
headerProc.execute();

detailProc = conn.prepareCall("detailProc...");
for (Fault fault : faultList) {
    detailProc.setString(...);
    detailProc.execute();
    detailProc.clearParameters();
}

conn.commit();
And then you should really look at "addBatch" for that detail loop.
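As a rough sketch of that suggestion (the procedure name and bind values are placeholders, and JDBC batching assumes the call has no OUT parameters):

conn.setAutoCommit(false);
try (CallableStatement detailProc = conn.prepareCall("{ call write_fault(?, ?) }")) {
    for (Fault fault : faultList) {
        detailProc.setString(1, fault.getCode());         // placeholder bind values
        detailProc.setTimestamp(2, fault.getTimestamp());
        detailProc.addBatch();                             // queue instead of executing immediately
    }
    detailProc.executeBatch();                             // one round trip for the whole batch
}
conn.commit();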
While it seems you've solved your immediate issue, you may want to look into JTA if you're running inside a Java EE container. JTA combined with EJB3.1* lets you do declarative transaction control and greatly simplifies transaction management in my experience.
*Don't worry, EJB3.1 is much simpler and cleaner and less horrid than prior EJB specs.
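For what it's worth, a minimal sketch of what that could look like with container-managed transactions; the bean, method, and JNDI names are made up, and writeHeader/writeFault stand in for the question's helpers:

@Stateless
public class FaultBatchWriter {

    @Resource(lookup = "jdbc/appDataSource")   // hypothetical JNDI name of a JTA-aware DataSource
    private DataSource dataSource;

    // EJB 3.1 defaults to a container-managed REQUIRED transaction: the container
    // commits when the method returns normally and rolls back on a system exception.
    public void writeHeaderAndDetails(Header header, List<Fault> faultList) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            writeHeader(conn, header);
            for (Fault fault : faultList) {
                writeFault(conn, fault);
            }
        }
    }

    // writeHeader(...) and writeFault(...) elided; they would prepare and execute
    // the same callable statements as in the snippets above.
}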

ORA-01000: maximum open cursors exceeded when using Spring SimpleJdbcCall

We are using Spring SimpleJdbcCall to call stored procedures in Oracle that return cursors. It looks like SimpleJdbcCall isn't closing the cursors and after a while the max open cursors is exceeded.
ORA-01000: maximum open cursors exceeded ; nested exception is java.sql.SQLException: ORA-01000: maximum open cursors exceeded spring
There are a few other people on forums who've experienced this, but seemingly no answers. It looks to me like a bug in the Spring/Oracle support.
This bug is critical and could impact our future use of Spring JDBC.
Has anybody come across a fix - either tracking the problem to the Spring code or found a workaround that avoids the problem?
We are using Spring 2.5.6.
Here is the new version of the code using SimpleJdbcCall which appears to not be correctly closing the result set that the proc returns via a cursor:
...
SimpleJdbcCall call = new SimpleJdbcCall(dataSource);
Map params = new HashMap();
params.put("remote_user", session.getAttribute("cas_username"));

Map result = call
    .withSchemaName("urs")
    .withCatalogName("ursWeb")
    .withProcedureName("get_roles")
    .returningResultSet("rolesCur", new au.edu.une.common.util.ParameterizedMapRowMapper())
    .execute(params);

List roles = (List) result.get("rolesCur");
The older version of the code which doesn't use Spring JDBC doesn't have this problem:
oracleConnection = dataSource.getConnection();
callable = oracleConnection.prepareCall("{ call urs.ursweb.get_roles(?, ?) }");
callable.setString(1, (String) session.getAttribute("cas_username"));
callable.registerOutParameter(2, oracle.jdbc.OracleTypes.CURSOR);
callable.execute();

ResultSet rset = (ResultSet) callable.getObject(2);
// ... do stuff with the result set

if (rset != null) rset.close();                           // Explicitly close the resultset
if (callable != null) callable.close();                   // Close the callable
if (oracleConnection != null) oracleConnection.close();   // Close the connection
It would appear that Spring JDBC is NOT calling rset.close(). If I comment out that line in the old code then after load testing we get the same database exception.
After much testing we have fixed this problem. It was a combination of how we were using the Spring framework, the Oracle client, and the Oracle DB. We were creating new SimpleJdbcCalls, which use the Oracle JDBC client's metadata calls; those results are returned as cursors, which were not being closed and cleaned up. I consider this a bug in the Spring JDBC framework: it makes the metadata call but then does not close the cursor. Spring should copy the metadata out of the cursor and close it properly. I haven't bothered opening a JIRA issue with Spring because, if you use best practice, the bug isn't exhibited.
Tweaking OPEN_CURSORS or any of the other parameters is the wrong way to fix this problem and just delays it from appearing.
We worked around it/fixed it by moving the SimpleJdbcCall into a singleton DAO, so there is only one cursor open for each Oracle proc that we call. These cursors stay open for the lifetime of the app, which I consider a bug. As long as OPEN_CURSORS is larger than the number of SimpleJdbcCall objects, there won't be hassles.
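Roughly, the workaround looks like this; the DAO and method names are illustrative, while the schema, catalog, procedure, and result-set names come from the snippet above:

public class RoleDao {

    // Built once; the metadata lookup (and the cursor it opens) happens only here.
    private final SimpleJdbcCall getRolesCall;

    public RoleDao(DataSource dataSource) {
        this.getRolesCall = new SimpleJdbcCall(dataSource)
            .withSchemaName("urs")
            .withCatalogName("ursWeb")
            .withProcedureName("get_roles")
            .returningResultSet("rolesCur",
                new au.edu.une.common.util.ParameterizedMapRowMapper());
    }

    public List<?> rolesFor(String remoteUser) {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("remote_user", remoteUser);
        Map<String, Object> result = getRolesCall.execute(params);
        return (List<?>) result.get("rolesCur");
    }
}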
Well, I got this problem when I was reading BLOBs. The main cause was that I was also updating the table, and the Statement containing the update clause was not closed automatically. That nasty cursor leak ate up all the free cursors. After an explicit call to statement.close() the error disappeared.
Moral: always close everything explicitly; don't rely on things being closed automatically when a Statement is disposed.
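For example, a small sketch of that rule with try-with-resources (Java 7+; the table, columns, and bind values are made up):

try (Connection conn = dataSource.getConnection();
     PreparedStatement update = conn.prepareStatement(
             "UPDATE my_table SET my_blob = ? WHERE id = ?")) {
    update.setBytes(1, blobBytes);
    update.setLong(2, rowId);
    update.executeUpdate();
}   // statement (and its server-side cursor) and connection are closed here, even on exceptions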
Just be careful setting OPEN_CURSORS to higher and higher values as there are overheads and it could just be band-aiding over an actual problem/error in your code.
I don't have experience with the Spring side of this but worked on an app where we had many issues with ORA-01000 errors and constantly adjusting OPEN_CURSORS just made the problem go away for a little while ...
I can promise you that it's not Spring. I worked on a Spring 1.x app that went live in 2005 and hasn't leaked a connection since (WebLogic 9, JDK 5). You aren't closing your resources properly.
Are you using a connection pool? Which app server are you deploying to? Which version of Spring? Oracle? Java? Details, please.
Oracle OPEN_CURSORS is the key alright. We have a small 24x7 app running against Oracle XE with only a few apparently open cursors. We had intermittent max open cursors errors until we set the OPEN_CURSORS initialization value to > 300
The solution is not in Spring, but in Oracle: you need to set the OPEN_CURSORS initialization parameter to some value higher than the default 50.
Oracle -- at least as of 8i, perhaps it's changed -- would reparse JDBC PreparedStatement objects unless you left them open. This was expensive, and most people ended up maintaining a fixed pool of open statements that are resubmitted.
(Taking a quick look at the 10g docs, they explicitly note that the OCI driver will cache PreparedStatements, so I'm assuming that the native driver still recreates them each time.)
