I'm working on a Java project that incorporates a PostgreSQL 9.0 database tier, accessed via JDBC. The SQL is wrapped in functions, which are executed from Java like stored procedures using JDBC.
The database requires a header-detail scheme, with detail records requiring the foreign-key ID to the header. Thus, the header row is written first, then a couple thousand detail records. I need to prevent the user from accessing the header until the details have completed writing.
You may suggest wrapping the entire transaction so that the header record cannot be committed until the detail records have finished writing. However, as you can see below, I've isolated the transactions to separate calls in Java: write the header, then loop through the details (writing detail rows as I go). Due to the sheer size of the data, it is not feasible to pass all the detail data to a single function call to perform one transaction.
My question is: how do I wrap the transaction at the JDBC level, so that the header is not committed until the detail records have finished writing?
The best metaphor for the solution would be SQL Server's named transactions, where a transaction can be started in the data-access-layer code (outside other transactions) and completed in a later DB call.
The following (simplified) code executes without error, but doesn't resolve the isolation problem:
DatabaseManager mgr = DatabaseManager.getInstance();
Connection conn = mgr.getConnection();
CallableStatement proc = null;

conn.setAutoCommit(false);
proc = conn.prepareCall("BEGIN TRANSACTION");
proc.execute();

// Write header details
writeHeader(....);

for (Fault fault : faultList) {
    writeFault(fault, buno, rsmTime, dnld, faultType, verbose);
}

proc = conn.prepareCall("COMMIT TRANSACTION");
proc.execute();
Your brilliant answer will be much appreciated!
Are you using the same connection for writeHeader and writeFault?
conn.setAutoCommit(false);

headerProc = conn.prepareCall("headerProc...");
headerProc.setString(...);
headerProc.execute();

detailProc = conn.prepareCall("detailProc...");
for (Fault fault : faultList) {
    detailProc.setString(...);
    detailProc.execute();
    detailProc.clearParameters();
}

conn.commit();
And then you should really look at "addBatch" for that detail loop.
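A hedged sketch of what the combined pattern can look like on a single connection, with the detail loop batched. The procedure names (`write_header`, `write_fault`), parameter shapes, and the chunk size of 500 are placeholders for illustration, not from the original code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class HeaderDetailWriter {

    // Split the detail list into chunks so each executeBatch() stays bounded in memory.
    static <T> List<List<T>> partition(List<T> items, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return chunks;
    }

    // Sketch: header + details in one transaction on one connection.
    static void writeAll(Connection conn, List<String> faults) throws SQLException {
        conn.setAutoCommit(false); // transaction starts with the first statement
        try (PreparedStatement header = conn.prepareStatement("SELECT write_header(?)");
             PreparedStatement detail = conn.prepareStatement("SELECT write_fault(?)")) {
            header.setString(1, "header-data");
            header.execute();

            for (List<String> chunk : partition(faults, 500)) {
                for (String fault : chunk) {
                    detail.setString(1, fault);
                    detail.addBatch();     // queue, no round trip yet
                }
                detail.executeBatch();     // one round trip per chunk
            }
            conn.commit();                 // header and details become visible together
        } catch (SQLException e) {
            conn.rollback();               // nothing is visible on failure
            throw e;
        }
    }
}
```

Because commit happens only after the last batch, no other session can see the header row until all detail rows are in place.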
While it seems you've solved your immediate issue, you may want to look into JTA if you're running inside a Java EE container. JTA combined with EJB 3.1* lets you do declarative transaction control and, in my experience, greatly simplifies transaction management.
*Don't worry, EJB3.1 is much simpler and cleaner and less horrid than prior EJB specs.
I want to fetch a list of Relation records, so I use .fetchInto(RELATION); then I want to iterate over the list and commit each iteration to the database. This doesn't seem to be working for me, because I get Caused by: java.sql.SQLException: Connection is closed while updating the record. I don't have this problem with regular jOOQ queries, and no connections appear to be closed.
When I use contact.attach(jooq().configuration()) it seems to be working again. How can I prevent it from detaching?
I start and commit a transaction through JPA.em().getTransaction().*.
org.jooq.exception.DataAccessException: SQL [update `Relation` set `Relation`.`organizationName` = ? where `Relation`.`id` = ?]; Connection is closed
at org.jooq_3.15.1.MYSQL.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2979)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:643)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349)
at org.jooq.impl.UpdatableRecordImpl.storeMergeOrUpdate0(UpdatableRecordImpl.java:331)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:228)
at org.jooq.impl.UpdatableRecordImpl.lambda$storeUpdate$1(UpdatableRecordImpl.java:220)
at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:143)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:219)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:156)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:151)
at worker.taskprocessors.BulkContactEditWorker.execute(BulkContactEditWorker.java:144)
Example:
var contacts = jooq()
    .selectFrom(
        RELATION
            .join(BULK_CONTACT_EDIT_CONTACTS)
            .on(
                BULK_CONTACT_EDIT_CONTACTS
                    .CONTACT_ID
                    .eq(RELATION.ID)
                    .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
                    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

for (RelationRecord contact : contacts) {
    contact.attach(jooq().configuration()); // I have to add this line to make it work.
    contact.setOrganizationName("OrganizationName");
    contact.update();

    JPA.em().getTransaction().commit();
    if (!JPA.em().getTransaction().isActive()) {
        JPA.em().getTransaction().begin();
    }
}
What's the problem
You're fetching your jOOQ RelationRecord values outside of a JPA transactional context, so the fetch runs in its own transaction (this is independent of jOOQ). jOOQ will always try to acquire() and release() a connection for every query, and when that happens outside of a transactional context, the connection is effectively closed afterwards (e.g. returned to the connection pool). You should switch the order of starting the transaction and running the jOOQ query:
// start transaction
// run jOOQ query
// iterate jOOQ results
// run updates
// commit transaction
If you're using declarative transactions (e.g. Spring's @Transactional annotation), you're less likely to run into this problem, as your initial jOOQ query will more likely already be in the same transaction.
Why did attaching work
When you explicitly attach the RelationRecord to the configuration, you take a record that was attached to the previously closed connection and attach it to the new configuration with the transactional connection. I'm assuming that your jooq() method produces a new DSLContext instance, which wraps the currently active DataSource or Connection.
Switch to bulk updates
However, if your example is all there is to your actual logic (i.e. you haven't simplified it for this Stack Overflow question), then why not just run a bulk update? It will be simpler and much faster.
jooq()
    .update(RELATION)
    .set(RELATION.ORGANIZATION_NAME, "OrganizationName")
    .where(RELATION.ID.in(
        select(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID)
        .from(BULK_CONTACT_EDIT_CONTACTS)
        .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
        .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    ))
    // Since you're using MySQL, you have native UPDATE .. LIMIT support
    .limit(batchSize)
    .execute();
I'm assuming your actual example is a bit more complex, because you need to set the PROCESSED flag to true somewhere, too, but it's always good to keep this option in mind.
I'm trying to make a simple SQL Transaction, but unfortunately I can't get it to work right.
What I'm doing right now:
protected Single<SQLConnection> tx() {
    return PostgreSQLClient.createShared(getVertx(), SqlUtil.getConnectionData())
        .rxGetConnection().map((SQLConnection conn) -> {
            conn.rxSetAutoCommit(false);
            return conn;
        });
}
This should be enough from what I understand from reading the docs?
when I inspect conn I see:
inTransaction = false
isAutoCommit = true
Why is that and what am I doing wrong here?
--
I use the common sql driver (http://vertx.io/docs/vertx-sql-common) with vertx 3.4.1
What you're seeing is the internal state of the connection. The current implementation controls transactionality using two flags:
inTransaction
isAutoCommit
The last one is flipped once you call the method:
conn.rxSetAutoCommit(false);
But this is handled internally as a NOOP; only when a subsequent call is performed will the transaction actually be started and the first flag change.
Keep in mind that this is internal state of the client and can/will change in the future when proper transaction isolation levels are implemented in the async driver for which there is already a pending pull request.
If you want to see it working, basically issue a SQL statement in your code, e.g.:
conn.rxSetAutoCommit(false)
    .flatMap(v -> conn.rxExecute("SELECT ..."))
and if you inspect again, you will see that both flags are now true and there is a running transaction on your server.
I'm trying to implement a simple client-middleware-database architecture, where the client sends a request to the middleware, which executes it on the database and finally returns the answer to the client.
To test the system I have to use the TPC-H benchmark, which is just a bunch of huge queries that must be executed in order to measure the response time and throughput of the system.
The problem I'm facing is driving me crazy: the client sends 150 separate insert queries to the middleware, and the middleware then processes each of them using executeUpdate. Here is a piece of my code:
Connection cc = c.getConnection();
Statement s = cc.createStatement();

int r = s.executeUpdate(tmpM.getMessage());
tmpR.add(c.getServerName() + ":" + c.getDatabaseName() + ": " + r + " row(s) affected.");

s.close();
cc.close();
If I just print all the queries and execute them manually with phpPgAdmin, then checking with pgAdmin shows all 150 items inserted correctly; but if I use my code, not all of them are added, only a part.
I did a lot of debugging, and it turns out that all the queries are sent to the DB (the code is executed 150 times and returns 1 each time, the correct answer), but the end result is not correct.
Does anyone have any suggestion on how to solve it?
Thank you in advance
-g
Why don't you try using a transaction instead of opening and closing a connection for each of the insert statements?
From the Oracle JDBC tutorial:
"A transaction is a set of one or more statements that is executed as a unit,
so either all of the statements are executed, or none of the statements is
executed."
http://download.oracle.com/javase/tutorial/jdbc/basics/transactions.html
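As a rough sketch of that suggestion, assuming the middleware can keep one connection open for the whole batch (the method and variable names here are illustrative, not from the question's code):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class BatchInsert {

    // All 150 inserts commit together, or none of them do.
    static void runBatch(Connection cc, List<String> messages) throws SQLException {
        cc.setAutoCommit(false);           // one transaction for the whole batch
        try (Statement s = cc.createStatement()) {
            for (String insertSql : messages) {
                s.addBatch(insertSql);     // queue the insert, no round trip yet
            }
            s.executeBatch();              // send them all at once
            cc.commit();                   // make every row visible atomically
        } catch (SQLException e) {
            cc.rollback();                 // on any failure, no partial inserts remain
            throw e;
        }
    }
}
```

This also sidesteps the per-statement connection open/close in the original loop, which is the expensive part.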
I'm using ibatis in spring to write to mysql.
I have an intermittent bug. On each cycle of a process I write two rows to the db. The next cycle I read in the rows from the previous cycle. Sometimes (one time in 30, sometimes more frequently, sometimes less) I only get back one row from the db.
I have turned off all caching I can think of. My sqlmap-config.xml just says:
<sqlMapConfig>
    <settings enhancementEnabled="false" statementCachingEnabled="false" classInfoCacheEnabled="false"/>
    <sqlMap resource="ibatis/model/cognitura_core.xml"/>
</sqlMapConfig>
Is there some asynchrony, or some caching in Spring, iBATIS, or the MySQL driver, that I'm missing?
Using spring 3.0.5, mybatis 2.3.5, mysql-connector-java 5.0.5
EDIT 1:
Could it be because I'm using a pool of connections (c3p0)? Is it possible the insert is still running when I'm reading? It's weird, though; I thought everything would be occurring synchronously unless I explicitly declared it asynchronous.
Are you calling SqlSession.commit() after the inserts? C3P0 asynchronously "closes" the connections, which may be calling commit under the covers. That could explain the behavior you are seeing.
I'm getting similar behavior. This is what I'm doing (I have an old version of iBATIS I don't plan on upgrading); you can easily move this into a decorator.
SqlMapSession session = client.openSession();
try {
    try {
        session.startTransaction();
        // do work
        session.commitTransaction();
        // The transaction should be committed now, but it doesn't always happen.
        session.getCurrentConnection().commit(); // Commit again :/
    } finally {
        session.endTransaction();
    }
} finally {
    session.close(); // would be nice if it was 'AutoCloseable'
}
We are using Spring SimpleJdbcCall to call stored procedures in Oracle that return cursors. It looks like SimpleJdbcCall isn't closing the cursors, and after a while the maximum number of open cursors is exceeded.
ORA-01000: maximum open cursors exceeded ; nested exception is java.sql.SQLException: ORA-01000: maximum open cursors exceeded spring
There are a few other people on forums who've experienced this, but seemingly no answers. It looks to me like a bug in the Spring/Oracle support.
This bug is critical and could impact our future use of Spring JDBC.
Has anybody come across a fix - either tracking the problem to the Spring code or found a workaround that avoids the problem?
We are using Spring 2.5.6.
Here is the new version of the code, using SimpleJdbcCall, which appears not to be correctly closing the result set that the proc returns via a cursor:
...
SimpleJdbcCall call = new SimpleJdbcCall(dataSource);
Map params = new HashMap();
params.put("remote_user", session.getAttribute("cas_username"));

Map result = call
    .withSchemaName("urs")
    .withCatalogName("ursWeb")
    .withProcedureName("get_roles")
    .returningResultSet("rolesCur", new au.edu.une.common.util.ParameterizedMapRowMapper())
    .execute(params);

List roles = (List) result.get("rolesCur");
The older version of the code which doesn't use Spring JDBC doesn't have this problem:
oracleConnection = dataSource.getConnection();
callable = oracleConnection.prepareCall("{ call urs.ursweb.get_roles(?, ?) }");

callable.setString(1, (String) session.getAttribute("cas_username"));
callable.registerOutParameter(2, oracle.jdbc.OracleTypes.CURSOR);
callable.execute();

ResultSet rset = (ResultSet) callable.getObject(2);
// ... do stuff with the result set
if (rset != null) rset.close();                         // Explicitly close the result set
if (callable != null) callable.close();                 // Close the callable
if (oracleConnection != null) oracleConnection.close(); // Close the connection
It would appear that Spring JDBC is NOT calling rset.close(). If I comment out that line in the old code, then after load testing we get the same database exception.
After much testing, we have fixed this problem. It was a combination of how we were using the Spring framework, the Oracle client, and the Oracle DB. We were creating new SimpleJdbcCalls, which were using the Oracle JDBC client's metadata calls; these were returned as cursors that were not being closed and cleaned up. I consider this a bug in the Spring JDBC framework in how it calls metadata and then does not close the cursor: Spring should copy the metadata out of the cursor and close it properly. I haven't bothered opening a JIRA issue with Spring, because if you follow best practice the bug isn't exhibited.
Tweaking OPEN_CURSORS or any of the other parameters is the wrong way to fix this problem and just delays it from appearing.
We worked around it (fixed it) by moving the SimpleJdbcCall into a singleton DAO, so there is only one cursor open for each Oracle proc that we call. These cursors stay open for the lifetime of the app, which I consider a bug. As long as OPEN_CURSORS is larger than the number of SimpleJdbcCall objects, there won't be hassles.
Well, I got this problem when I was reading BLOBs. The main cause was that I was also updating the table, and the Statement containing the update clause was not closed automatically. A nasty cursor leak ate all the free cursors. After an explicit call to statement.close(), the error disappeared.
Moral: always close everything; don't rely on resources being closed automatically when the Statement is disposed of.
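On Java 7 and later, try-with-resources makes this kind of leak much harder to write: Connection, Statement, and ResultSet are all AutoCloseable, and resources are closed in reverse declaration order even if an exception is thrown mid-block. A toy demo of that guarantee (the Resource class is a stand-in, not real JDBC):

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {
    static final List<String> closed = new ArrayList<>();

    // Stand-in for a JDBC resource; real Connection/Statement/ResultSet
    // are AutoCloseable too, so the same pattern applies.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Resource conn = new Resource("connection");
             Resource stmt = new Resource("statement");
             Resource rs   = new Resource("resultSet")) {
            // ... use the result set ...
        }
        // Closed in reverse order, even on exception:
        System.out.println(closed); // [resultSet, statement, connection]
    }
}
```

With this pattern the explicit `if (x != null) x.close();` chain from the older code above becomes unnecessary.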
Just be careful setting OPEN_CURSORS to higher and higher values, as there are overheads, and it could just be band-aiding over an actual problem or error in your code.
I don't have experience with the Spring side of this but worked on an app where we had many issues with ORA-01000 errors and constantly adjusting OPEN_CURSORS just made the problem go away for a little while ...
I can promise you that it's not Spring. I worked on a Spring 1.x app that went live in 2005 and hasn't leaked a connection since (WebLogic 9., JDK 5). You aren't closing your resources properly.
Are you using a connection pool? Which app server are you deploying to? Which version of Spring? Oracle? Java? Details, please.
Oracle OPEN_CURSORS is the key alright. We have a small 24x7 app running against Oracle XE with only a few apparently open cursors. We had intermittent max open cursors errors until we set the OPEN_CURSORS initialization value to > 300
The solution is not in Spring, but in Oracle: you need to set the OPEN_CURSORS initialization parameter to some value higher than the default 50.
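For reference, raising that limit is done on the Oracle side, not in Spring; a typical statement (requires DBA privileges, and the right value is workload-dependent) might look like:

```sql
-- Raise the per-session cursor limit; SCOPE=BOTH applies it immediately
-- and persists it across restarts (requires an spfile).
ALTER SYSTEM SET open_cursors = 300 SCOPE=BOTH;
```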
Oracle -- at least as of 8i, perhaps it's changed -- would reparse JDBC PreparedStatement objects unless you left them open. This was expensive, and most people ended up maintaining a fixed pool of open statements that were resubmitted.
(Taking a quick look at the 10g docs, they explicitly note that the OCI driver will cache PreparedStatements, so I'm assuming that the native driver still recreates them each time.)