Postgres - ERROR: prepared statement "S_1" already exists - java

When executing batch queries via JDBC to pgbouncer, I get the following error:
org.postgresql.util.PSQLException: ERROR: prepared statement "S_1" already exists
I've found bug reports around the web, but they all seem to deal with Postgres 8.3 or below, whereas we're working with Postgres 9.
Here's the code that triggers the error:
this.getJdbcTemplate().update("delete from xx where username = ?", username);
this.getJdbcTemplate().batchUpdate("INSERT INTO xx(a, b, c, d, e) " +
        "VALUES (?, ?, ?, ?, ?)", new BatchPreparedStatementSetter() {
    @Override
    public void setValues(PreparedStatement ps, int i) throws SQLException {
        ps.setString(1, value1);
        ps.setString(2, value2);
        ps.setString(3, value3);
        ps.setString(4, value4);
        ps.setBoolean(5, value5);
    }

    @Override
    public int getBatchSize() {
        return something();
    }
});
Anyone seen this before?
Edit 1:
This turned out to be a pgBouncer issue that occurs when using anything other than session pooling. We were using transaction pooling, which apparently can't support prepared statements. By switching to session pooling, we got around the issue.
Unfortunately, this isn't a good fix for our use case. We have two separate uses for pgBouncer: one part of our system does bulk updates which are most efficient as prepared statements, and another part needs many connections in very rapid succession. Since pgBouncer doesn't allow switching back and forth between session pooling and transaction pooling, we're forced to run two separate instances on different ports just to support our needs.
Edit 2:
I ran across this link, where the poster has rolled a patch of his own. We're currently looking at implementing it for our own uses if it proves to be safe and effective.

Disabling prepared statements in JDBC.
The proper way to do this for JDBC is to add the prepareThreshold=0 parameter to the connection string.
jdbc:postgresql://ip:port/db_name?prepareThreshold=0
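A minimal sketch of the same setting applied programmatically (the connection details are placeholders; prepareThreshold is a standard pgJDBC connection property):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class NoServerPrepare {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbuser");        // placeholder credentials
        props.setProperty("password", "secret");
        props.setProperty("prepareThreshold", "0"); // 0 = never switch to server-side prepared statements
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://host:5432/db_name", props)) {
            System.out.println("connected without server-side prepares");
        }
    }
}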

This turned out to be a pgBouncer issue that occurs when using anything other than session pooling. We were using transaction pooling, which apparently can't support prepared statements. By switching to session pooling, we got around the issue.
Unfortunately, this isn't a good fix for our use case. We have two separate uses for pgBouncer: one part of our system does bulk updates, which are most efficient as prepared statements, and another part needs many connections in very rapid succession. Since pgBouncer doesn't allow switching back and forth between session pooling and transaction pooling, we're forced to either run two separate instances on different ports just to support our needs, or to implement this patch. Preliminary testing shows that it works well, but time will tell whether it proves safe and effective.

New, Better Answer
To discard session state and effectively forget the "S_1" prepared statement, use the server_reset_query option in the PgBouncer config.
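A minimal pgbouncer.ini sketch of that option (DISCARD ALL is PgBouncer's stock reset query; whether it runs on every connection release depends on your PgBouncer version and pool mode):

[pgbouncer]
pool_mode = session
server_reset_query = DISCARD ALL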
Old Answer
See http://pgbouncer.projects.postgresql.org/doc/faq.html#_how_to_use_prepared_statements_with_transaction_pooling
Switching into session mode is not an ideal solution. Transaction pooling is much more efficient. But for transaction pooling you need stateless DB calls.
I think you have three options:
disable prepared statements in the JDBC driver,
manually deallocate them in your Java code,
configure pgbouncer to discard them on transaction end.
I would try option 1 or option 3, depending on how your app actually uses them.
For more info, read the docs:
http://pgbouncer.projects.postgresql.org/doc/config.html (search for server_reset_query),
or google for this:
postgresql jdbc +preparethreshold

In our case the issue was not related to pgbouncer. Since we were not able to append prepareThreshold=0 to the URL, here is what we did to fix it.
View the prepared statements:
select * from pg_prepared_statements;
Deallocate the faulty statement:
deallocate "S_1";

I had this problem too. We have pgbouncer configured at the transaction level and were running PostgreSQL 11.8; we just upgraded the PostgreSQL JDBC jar to the latest version, and that fixed it.

Related

PreparedStatement, DDL change doesn't change ResultSet on Db2

I have a Java-based application where I issue a query that uses a PreparedStatement. These prepared statements are cached in my connection implementation layer and are later discarded based on an eviction routine.
The issue I have stumbled onto is that if I generated a PreparedStatement with the following query:
SELECT FUNCTION(..) as A, T.* FROM table t WHERE ...
If I later issue an ALTER TABLE table ... statement and the above prepared statement gets reused, a column added by the ALTER isn't visible in the prepared statement's result set. If the statement expires and is therefore closed, or if the statement is manually closed after the ALTER and I use a new prepared statement, I do get the new column that was added.
I have a few questions:
Can someone explain what's going on as I don't observe this with other vendors?
Is this caching with the PreparedStatement controlled at the driver or database level?
If it's at the driver level, can this behavior be disabled?
UPDATE
We are explicitly connecting to an IBM Db2 11.5 instance on Linux using the following driver:
<dependency>
    <groupId>com.ibm.db2</groupId>
    <artifactId>jcc</artifactId>
    <version>11.5.0.0</version>
</dependency>
If you are using an IBM-supplied JDBC driver, then the following IBM statement can answer your question as it relates to the IBM type 4 JDBC driver. If you have other layers involved, different answers might apply.
The IBM Data Server Driver for JDBC and SQLJ does not check whether the definitions of target objects of statements in the internal statement cache have changed. If you execute SQL data definition language statements in an application, you need to disable internal statement caching for that application.
This statement comes from the documentation. In other words, when using ALTER TABLE ... on a table for which a cached PreparedStatement already exists, there is no automatic invalidation of the cached PreparedStatement.
You could disable PreparedStatement caching (by adjusting the maxStatements property), perform the DDL adjustments when the app is not running, or use some cache-clearing technique.
It is unclear from your question whether any benchmarking has been done to prove that statement caching is beneficial for the workload.
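A hedged sketch of the maxStatements suggestion above. The property name comes from the answer; how it is passed (Properties object, URL suffix, or DataSource setter) may vary by driver version, so treat the mechanism shown here as an assumption to verify against IBM's documentation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class NoJccStatementCache {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "db2inst1");   // placeholder credentials
        props.setProperty("password", "secret");
        // Per the IBM docs quoted above: 0 disables the internal statement cache,
        // so DDL changes become visible to newly prepared statements.
        props.setProperty("maxStatements", "0");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://host:50000/MYDB", props)) {
            // run DDL and re-prepare statements without stale cache entries
        }
    }
}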

Closing a PreparedStatement after a single execute – is it a design flaw?

I have looked into various places and have heard a lot of dubious claims, ranging from "PreparedStatement should be preferred over Statement everywhere, even if only for the performance benefit" all the way to claims that PreparedStatements should be used exclusively for batched statements and nothing else.
However, there seems to be a blind spot in (primarily online) discussions I have followed. Let me present a concrete scenario.
We have an EDA-designed application with a DB connection pool. Events come, some of them require persistence, some do not. Some are artificially generated (e.g. update/reset something every X minutes, for example).
Some events come and are handled sequentially, but other types of events (also requiring persistence) can (and will) be handled concurrently.
Aside from those artificially generated events, there is no structure in how events requiring persistence arrive.
This application was designed quite a while ago (roughly 2005) and supports several DBMSes. The typical event handler (where persistence is required):
get connection from pool
prepare sql statement
execute prepared statement
process the result set, if applicable, close it
close prepared statement
prepare a different statement, if necessary and handle the same way
return connection to pool
If an event requires batch processing, the statement is prepared once and addBatch/executeBatch methods are used. This is an obvious performance benefit and these cases are not related to this question.
Recently, I have received an opinion, that the whole idea of preparing (parsing) a statement, executing it once and closing is essentially a misuse of PreparedStatement, provides zero performance benefits, regardless of whether server or client prepared statements are used and that typical DBMSes (Oracle, DB2, MSSQL, MySQL, Derby, etc.) will not even promote such a statement to prepared statement cache (or at least, their default JDBC driver/datasource will not).
Moreover, I had to test certain scenarios in dev environment on MySQL, and it seems that the Connector/J usage analyzer agrees with this idea. For all non-batched prepared statements, calling close() prints:
PreparedStatement created, but used 1 or fewer times. It is more efficient to prepare statements once, and re-use them many times
Due to application design choices outlined earlier, having a PreparedStatement instance cache that holds every single SQL statement used by any event for each connection in the connection pool sounds like a poor choice.
Could someone elaborate further on this? Is the logic "prepare-execute (once)-close" flawed and essentially discouraged?
P.S. Explicitly specifying useUsageAdvisor=true and cachePrepStmts=true for Connector/J and using either useServerPrepStmts=true or useServerPrepStmts=false still results in warnings about efficiency when calling close() on PreparedStatement instances for every non-batched SQL statement.
Is the logic prepare-execute [once]-close flawed and essentially discouraged?
I don't see that as being a problem, per se. A given SQL statement needs to be "prepared" at some point, whether explicitly (with a PreparedStatement) or "on the fly" (with a Statement). There may be a tiny bit more overhead incurred if we use a PreparedStatement instead of a Statement for something that will only be executed once, but it is unlikely that the overhead involved would be significant, especially if the statement you cite is true:
typical DBMSes (Oracle, DB2, MSSQL, MySQL, Derby, etc.) will not even promote such a statement to prepared statement cache (or at least, their default JDBC driver/datasource will not).
What is discouraged is a pattern like this:
for (int thing : thingList) {
    PreparedStatement ps = conn.prepareStatement(" {some constant SQL statement} ");
    ps.setInt(1, thing);
    ps.executeUpdate();
    ps.close();
}
because the PreparedStatement is only used once and the same SQL statement is being prepared over and over again. (Although even that might not be such a big deal if the SQL statement and its execution plan are indeed cached.) The better way to do that is
PreparedStatement ps = conn.prepareStatement(" {some constant SQL statement} ");
for (int thing : thingList) {
    ps.setInt(1, thing);
    ps.executeUpdate();
}
ps.close();
... or even better, with a "try with resources" ...
try (PreparedStatement ps = conn.prepareStatement(" {some constant SQL statement} ")) {
    for (int thing : thingList) {
        ps.setInt(1, thing);
        ps.executeUpdate();
    }
}
Note that this is true even without using batch processing. The SQL statement is still only prepared once and used several times.
As others have already stated, the most expensive part is parsing the statement in the database. Some database systems (this is pretty much DB-dependent; I will speak mainly for Oracle) may profit if the statement is already parsed in the shared pool. (In Oracle terminology this is called a soft parse, which is cheaper than a hard parse, the parse of a new statement.) You can profit from a soft parse even if you use the prepared statement only once.
So the important task is to give the database a chance to reuse the statement. A typical counterexample is the handling of an IN list based on a collection in Hibernate. You end up with a statement such as
.. FROM T WHERE X in (?,?,?, … length based on the size of the collection,?,? ,?,?)
You can't reuse this statement if the size of the collection differs.
A good starting point to get an overview of the spectrum of SQL queries produced by a running application is (in Oracle) the V$SQL view. Filter PARSING_SCHEMA_NAME by your connection pool user and check the SQL_TEXT and the EXECUTIONS count.
Two extreme situations should be avoided:
Passing parameters (IDs) in the query text (this is well known) and
Reusing statement for different access paths.
An example of the latter is a query that, with a provided parameter, performs an index access to a limited part of the table, while without the parameter all records must be processed (full table scan). In that case it is definitely no problem to create two different statements (as parsing them leads to different execution plans).
PreparedStatements are preferable because one is needed regardless of whether you create one programmatically or not; internally the database creates one every time a query is run - creating one programmatically just gives you a handle to it. Creating and throwing away a PreparedStatement every time doesn't add much overhead over using Statement.
A large effort is required by the database to create one (syntax checking, parsing, permissions checking, optimization, access strategy, etc). Reusing one bypasses this effort for subsequent executions.
Instead of throwing them away, try either writing the query in such a way that it can be reused, eg by ignoring null input parameters:
where someCol = coalesce(?, someCol)
so if you set the parameter to null (i.e. "unspecified"), the condition succeeds.
or, if you absolutely must build the query every time, keep references to the PreparedStatements in a Map where the built query is the key, and reuse them if you get a hit. Use a WeakHashMap<String, PreparedStatement> for your map implementation to avoid running out of memory, as sketched below.
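A minimal sketch of that map-based reuse, following the answer's own WeakHashMap suggestion (the class and method names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;
import java.util.WeakHashMap;

class StatementCache {
    private final Connection conn;
    private final Map<String, PreparedStatement> cache = new WeakHashMap<>();

    StatementCache(Connection conn) {
        this.conn = conn;
    }

    // Returns a cached statement for this exact SQL text, preparing it on a miss.
    // WeakHashMap lets the GC drop entries under memory pressure; a production
    // version should also close statements whose entries get evicted.
    PreparedStatement get(String sql) throws SQLException {
        PreparedStatement ps = cache.get(sql);
        if (ps == null || ps.isClosed()) {
            ps = conn.prepareStatement(sql);
            cache.put(sql, ps);
        }
        return ps;
    }
}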
PreparedStatement created, but used 1 or fewer times. It is more efficient to prepare statements once, and re-use them many times
I think you may safely ignore this warning; it is similar to the claim that it is more efficient to work the first 40 hours of the week than to sleep the next 56 hours, eat for the following 7 hours, and treat the rest as free time.
You need exactly one execution per event - should you perform 50 to get a better average?
SQL commands that run only once just waste database resources (memory, processing) when sent as a prepared statement. On the other hand, not using prepared statements leaves the app vulnerable to SQL injection.
Is security (protection from SQL injection) working against performance (a prepared statement that runs just once)? Yes, but...
It should not be that way. It is Java's choice NOT to provide an interface that lets developers call the right database API: SQL commands that run just once AND are properly protected against SQL injection! Why does Java not implement the correct tool for this specific task?
It could be as follows:
Statement Interface - Different SQL commands could be submitted. One execution of SQL commands. Bind variables not allowed.
PreparedStatement Interface - One SQL command could be submitted. Multiple executions of SQL command. Bind variables allowed.
(MISSING IN JAVA!) RunOnceStatement - One SQL command could be submitted. One execution of SQL command. Bind variables allowed.
For example, the correct routine (API) could be called in Postgres, with the driver mapping to:
- Statement interface - call PQexec()
- PreparedStatement interface - call PQprepare() / PQexecPrepared() / ...
- (MISSING IN JAVA!) RunOnceStatement interface - call PQexecParams()
Using a prepared statement for SQL code that runs just once is a BIG performance problem: more processing in the database and wasted database memory, maintaining plans that will never be called again. The plan cache gets so crowded that SQL commands which actually are executed multiple times may be evicted from it.
But Java does not implement the correct interface, and forces everybody to use prepared statements everywhere, just to protect against SQL injection...

BneBaseSQL.executeQuery: Stack trace: java.sql.SQLException: ORA-01000: maximum open cursors exceeded [duplicate]

I am getting an ORA-01000 SQL exception. So I have some queries related to it.
Are maximum open cursors exactly related to number of JDBC connections, or are they also related to the statement and resultset objects we have created for a single connection ? (We are using pool of connections)
Is there a way to configure the number of statement/resultset objects in the database (like connections) ?
Is it advisable to use instance variable statement/resultset object instead of method local statement/resultset object in a single threaded environment ?
Does executing a prepared statement in a loop cause this issue ? (Of course, I could have used sqlBatch) Note: pStmt is closed once loop is over.
{ //method try starts
    String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
    pStmt = obj.getConnection().prepareStatement(sql);
    pStmt.setLong(1, subscriberID);
    for (String language : additionalLangs) {
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
    }
} //method/try ends
{ //finally starts
    pStmt.close();
} //finally ends
What will happen if conn.createStatement() and conn.prepareStatement(sql) are called multiple times on single connection object ?
Edit1:
6. Will the use of Weak/Soft reference statement object help in preventing the leakage ?
Edit2:
1. Is there any way, I can find all the missing "statement.close()"s in my project ? I understand it is not a memory leak. But I need to find a statement reference (where close() is not performed) eligible for garbage collection ? Any tool available ? Or do I have to analyze it manually ?
Please help me understand it.
Solution
To find the opened cursors in an Oracle DB for username VELU:
Go to the ORACLE machine and start sqlplus as sysdba:
[oracle@db01 ~]$ sqlplus / as sysdba
Then run
SELECT A.VALUE,
S.USERNAME,
S.SID,
S.SERIAL#
FROM V$SESSTAT A,
V$STATNAME B,
V$SESSION S
WHERE A.STATISTIC# = B.STATISTIC#
AND S.SID = A.SID
AND B.NAME = 'opened cursors current'
AND USERNAME = 'VELU';
If possible, please read my answer below for a fuller understanding of this solution.
ORA-01000, the maximum-open-cursors error, is an extremely common error in Oracle database development. In the context of Java, it happens when the application attempts to open more ResultSets than there are configured cursors on a database instance.
Common causes are:
Configuration mistake
You have more threads in your application querying the database than cursors on the DB. One case is where you have a connection and thread pool larger than the number of cursors on the database.
You have many developers or applications connected to the same DB instance (which will probably include many schemas) and together you are using too many connections.
Solution:
Increasing the number of cursors on the database (if resources allow) or
Decreasing the number of threads in the application.
Cursor leak
The application is not closing ResultSets (in JDBC) or cursors (in stored procedures on the database).
Solution: Cursor leaks are bugs; increasing the number of cursors on the DB simply delays the inevitable failure. Leaks can be found using static code analysis, JDBC or application-level logging, and database monitoring.
Background
This section describes some of the theory behind cursors and how JDBC should be used. If you don't need to know the background, you can skip this and go straight to 'Eliminating Leaks'.
What is a cursor?
A cursor is a resource on the database that holds the state of a query, specifically the position where a reader is in a ResultSet. Each SELECT statement has a cursor, and PL/SQL stored procedures can open and use as many cursors as they require. You can find out more about cursors on Orafaq.
A database instance typically serves several different schemas and many different users, each with multiple sessions. To do this, it has a fixed number of cursors available for all schemas, users and sessions. When all cursors are open (in use) and a request comes in that requires a new cursor, the request fails with an ORA-01000 error.
Finding and setting the number of cursors
The number is normally configured by the DBA on installation. The number of cursors currently in use, the maximum number and the configuration can be accessed in the Administrator functions in Oracle SQL Developer. From SQL it can be set with:
ALTER SYSTEM SET OPEN_CURSORS=1337 SID='*' SCOPE=BOTH;
Relating JDBC in the JVM to cursors on the DB
The JDBC objects below are tightly coupled to the following database concepts:
JDBC Connection is the client representation of a database session and provides database transactions. A connection can have only a single transaction open at any one time (but transactions can be nested)
A JDBC ResultSet is supported by a single cursor on the database. When close() is called on the ResultSet, the cursor is released.
A JDBC CallableStatement invokes a stored procedure on the database, often written in PL/SQL. The stored procedure can create zero or more cursors, and can return a cursor as a JDBC ResultSet.
JDBC is thread safe: It is quite OK to pass the various JDBC objects between threads.
For example, you can create the connection in one thread; another thread can use this connection to create a PreparedStatement and a third thread can process the result set. The single major restriction is that you cannot have more than one ResultSet open on a single PreparedStatement at any time. See Does Oracle DB support multiple (parallel) operations per connection?
Note that a database commit occurs on a Connection, and so all DML (INSERT, UPDATE and DELETE's) on that connection will commit together. Therefore, if you want to support multiple transactions at the same time, you must have at least one Connection for each concurrent Transaction.
Closing JDBC objects
A typical example of executing a ResultSet is:
Statement stmt = conn.createStatement();
try {
    ResultSet rs = stmt.executeQuery("SELECT FULL_NAME FROM EMP");
    try {
        while (rs.next()) {
            System.out.println("Name: " + rs.getString("FULL_NAME"));
        }
    } finally {
        try { rs.close(); } catch (Exception ignore) { }
    }
} finally {
    try { stmt.close(); } catch (Exception ignore) { }
}
Note how the finally clause ignores any exception raised by the close():
If you simply close the ResultSet without the try {} catch {}, it might fail and prevent the Statement being closed
We want to allow any exception raised in the body of the try to propagate to the caller.
If you have a loop over, for example, creating and executing Statements, remember to close each Statement within the loop.
In Java 7, Oracle introduced the AutoCloseable interface, which replaces most of the Java 6 boilerplate with some nice syntactic sugar.
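For example, the query above shrinks to this with try-with-resources (same behavior: both objects are closed automatically, in reverse order, even when an exception is thrown):

try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT FULL_NAME FROM EMP")) {
    while (rs.next()) {
        System.out.println("Name: " + rs.getString("FULL_NAME"));
    }
}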
Holding JDBC objects
JDBC objects can be safely held in local variables, object instance and class members. It is generally better practice to:
Use object instance or class members to hold JDBC objects that are reused multiple times over a longer period, such as Connections and PreparedStatements
Use local variables for ResultSets since these are obtained, looped over and then closed typically within the scope of a single function.
There is, however, one exception: If you are using EJBs, or a Servlet/JSP container, you have to follow a strict threading model:
Only the Application Server creates threads (with which it handles incoming requests)
Only the Application Server creates connections (which you obtain from the connection pool)
When saving values (state) between calls, you have to be very careful. Never store values in your own caches or static members - this is not safe across clusters and other weird conditions, and the Application Server may do terrible things to your data. Instead use stateful beans or a database.
In particular, never hold JDBC objects (Connections, ResultSets, PreparedStatements, etc) over different remote invocations - let the Application Server manage this. The Application Server not only provides a connection pool, it also caches your PreparedStatements.
Eliminating leaks
There are a number of processes and tools available to help detect and eliminate JDBC leaks:
During development - catching bugs early is by far the best approach:
Development practices: Good development practices should reduce the number of bugs in your software before it leaves the developer's desk. Specific practices include:
Pair programming, to educate those without sufficient experience
Code reviews because many eyes are better than one
Unit testing which means you can exercise any and all of your code base from a test tool which makes reproducing leaks trivial
Use existing libraries for connection pooling rather than building your own
Static Code Analysis: Use a tool like the excellent Findbugs to perform a static code analysis. This picks up many places where the close() has not been correctly handled. Findbugs has a plugin for Eclipse, but it also runs standalone for one-offs, has integrations into Jenkins CI and other build tools
At runtime:
Holdability and commit
If the ResultSet holdability is ResultSet.CLOSE_CURSORS_AT_COMMIT (that is the exact JDBC constant name), then the ResultSet is closed when the Connection.commit() method is called. This can be set using Connection.setHoldability() or by using the overloaded Connection.createStatement() method.
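A short sketch of both ways to request close-on-commit holdability (driver support varies; the wrapper class and method are made up for illustration):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class HoldabilityExample {
    static void useCloseOnCommit(Connection conn) throws SQLException {
        // Connection-wide default:
        conn.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);
        // Or per statement, via the overloaded factory method:
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY,
                ResultSet.CLOSE_CURSORS_AT_COMMIT);
        stmt.close();
    }
}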
Logging at runtime.
Put good log statements in your code. These should be clear and understandable so the customer, support staff and teammates can understand without training. They should be terse and include printing the state/internal values of key variables and attributes so that you can trace processing logic. Good logging is fundamental to debugging applications, especially those that have been deployed.
You can add a debugging JDBC driver to your project (for debugging - don't actually deploy it). One example (I have not used it) is log4jdbc. You then need to do some simple analysis on the log file to see which executes don't have a corresponding close. Counting the opens and closes should highlight whether there is a potential problem.
Monitoring the database. Monitor your running application using tools such as the SQL Developer 'Monitor SQL' function or Quest's TOAD. Monitoring is described in this article. During monitoring, you query the open cursors (e.g. from the v$sesstat table) and review their SQL. If the number of cursors is increasing, and (most importantly) becoming dominated by one identical SQL statement, you know you have a leak with that SQL. Search your code and review.
Other thoughts
Can you use WeakReferences to handle closing connections?
Weak and soft references are ways of allowing you to reference an object in a way that allows the JVM to garbage collect the referent at any time it deems fit (assuming there are no strong reference chains to that object).
If you pass a ReferenceQueue to the constructor of the soft or weak Reference, the object is placed on the ReferenceQueue when it is GC'ed (if that ever occurs). With this approach, you can interact with the object's finalization, and you could close or finalize the object at that moment.
Phantom references are a bit weirder; their purpose is only to control finalization, but you can never get a reference to the original object, so it's going to be hard to call the close() method on it.
However, it is rarely a good idea to attempt to control when the GC is run (Weak, Soft and PhantomReferences let you know after the fact that the object is enqueued for GC). In fact, if the amount of memory in the JVM is large (eg -Xmx2000m) you might never GC the object, and you will still experience the ORA-01000. If the JVM memory is small relative to your program's requirements, you may find that the ResultSet and PreparedStatement objects are GCed immediately after creation (before you can read from them), which will likely fail your program.
TL;DR: The weak reference mechanism is not a good way to manage and close Statement and ResultSet objects.
I am adding a few more points of understanding.
A cursor is only about a statement object; it is neither the ResultSet nor the Connection object.
But we still have to close the result set to free some Oracle memory. Even so, an unclosed result set is not counted against CURSORS.
Closing the Statement object will automatically close its ResultSet object too.
A cursor will be created for every SELECT/INSERT/UPDATE/DELETE statement.
Each ORACLE DB instance can be identified by an Oracle SID; similarly, ORACLE identifies each connection by a session SID. The two SIDs are different things.
So an ORACLE session is nothing but a JDBC (TCP) connection, which is nothing but one SID.
If we set maximum cursors to 500, that limit applies per JDBC session/connection/SID.
So we can have many JDBC connections, each with its own number of cursors (statements).
Once the JVM terminates, all connections/cursors are closed; likewise, when a JDBC Connection is closed, the cursors of that connection are closed.
Logging in as sysdba.
In PuTTY (Oracle login):
[oracle@db01 ~]$ sqlplus / as sysdba
In SQL*Plus:
UserName: sys as sysdba
Set the session_cached_cursors value to 0 so that closed cursors are not cached:
alter session set session_cached_cursors=0;
select * from V$PARAMETER where name='session_cached_cursors';
Select the existing OPEN_CURSORS value set per connection in the DB:
SELECT max(a.value) as highest_open_cur, p.value as max_open_cur
FROM v$sesstat a, v$statname b, v$parameter p
WHERE a.statistic# = b.statistic#
AND b.name = 'opened cursors current'
AND p.name = 'open_cursors'
GROUP BY p.value;
Below is the query to find the SID/connections list with open cursor values.
SELECT a.value, s.username, s.sid, s.serial#
FROM v$sesstat a, v$statname b, v$session s
WHERE a.statistic# = b.statistic# AND s.sid=a.sid
AND b.name = 'opened cursors current' AND username = 'SCHEMA_NAME_IN_CAPS'
Use the query below to identify the SQL in the open cursors:
SELECT oc.sql_text, s.sid
FROM v$open_cursor oc, v$session s
WHERE OC.sid = S.sid
AND s.sid=1604
AND OC.USER_NAME ='SCHEMA_NAME_IN_CAPS'
Now debug the Code and Enjoy!!! :)
Correct your code like this:
try { //method try starts
    String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
    pStmt = obj.getConnection().prepareStatement(sql);
    pStmt.setLong(1, subscriberID);
    for (String language : additionalLangs) {
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
    }
} //method/try ends
finally { //finally starts
    pStmt.close();
} //finally ends
Are you sure that you're really closing your pStatements, connections and results?
To analyze open objects you can implement a delegator pattern, which wraps code around your statement, connection and result objects. That way you'll see whether an object was successfully closed.
An example for: pStmt = obj.getConnection().prepareStatement(sql);

class obj {
    public Connection getConnection() {
        return new ConnectionDelegator(/* ...create your real connection object here and pass it in... */);
    }
}

class ConnectionDelegator implements Connection {
    Connection delegates;

    public ConnectionDelegator(Connection con) {
        this.delegates = con;
    }

    public PreparedStatement prepareStatement(String sql) throws SQLException {
        return delegates.prepareStatement(sql);
    }

    public void close() throws SQLException {
        try {
            delegates.close();
        } finally {
            log.debug(delegates.toString() + " was closed");
        }
    }
}
If your application is a Java EE application running on Oracle WebLogic as the application server, a possible cause for this issue is the Statement Cache Size setting in WebLogic.
If the Statement Cache Size setting for a particular data source is about equal to, or greater than, the Oracle database maximum open cursor count setting, then all of the open cursors can be consumed by cached SQL statements that are held open by WebLogic, resulting in the ORA-01000 error.
To address this, reduce the Statement Cache Size setting for each WebLogic datasource that points to the Oracle database to be significantly less than the maximum cursor count setting on the database.
In the WebLogic 10 Admin Console, the Statement Cache Size setting for each data source can be found at Services (left nav) > Data Sources > (individual data source) > Connection Pool tab.
I too had faced this issue. The exception below used to come up:
java.sql.SQLException: - ORA-01000: maximum open cursors exceeded
I was using the Spring Framework with Spring JDBC for the DAO layer.
My application used to leak cursors somehow, and after a few minutes or so it used to give me this exception.
After a lot of thorough debugging and analysis, I found that the problem was with the indexing, primary key and unique constraints in one of the tables used in the query I was executing.
My application was trying to update columns which were mistakenly indexed.
So, whenever my application hit the update query on the indexed columns, the database tried to re-index based on the updated values. It was leaking the cursors.
I was able to solve the problem with proper indexing on the columns used to search in the query, and by applying appropriate constraints wherever required.
I faced the same problem (ORA-01000) today. I had a for loop in the try{} to execute a SELECT statement in an Oracle DB many times (each time changing a parameter), and in the finally{} I had my code to close the ResultSet, PreparedStatement and Connection as usual. But as soon as I reached a specific number of loops (1000) I got the Oracle error about too many open cursors.
Based on the post by Andrew Alcock above, I made changes so that inside the loop I closed each result set and each statement after getting the data and before looping again, and that solved the problem.
Additionally, the exact same problem occurred in another loop of insert statements, in another Oracle DB (ORA-01000), this time after 300 statements. Again it was solved the same way, so either the PreparedStatement or the ResultSet or both count as open cursors until they are closed.
Did you set autocommit=true? If not, try this:
{ //method try starts
    String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
    Connection conn = obj.getConnection();
    pStmt = conn.prepareStatement(sql);
    for (String language : additionalLangs) {
        pStmt.setLong(1, subscriberID);
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
        conn.commit();
    }
} //method/try ends
{ //finally starts
    pStmt.close();
} //finally ends
Query to find the SQL that opened the cursors:
SELECT s.machine, oc.user_name, oc.sql_text, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
and S.USERNAME='XXXX'
GROUP BY user_name, sql_text, machine
HAVING COUNT(1) > 2
ORDER BY count(1) DESC
This problem mainly happens when you are using connection pooling, because when you close a connection it goes back to the connection pool, and the cursors associated with that connection never get closed, since the connection to the database is still open.
So one alternative is to decrease the idle connection time of connections in the pool, so that whenever a connection sits idle for, say, 10 seconds, the connection to the database is closed and a new connection is created to put in the pool.
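As a sketch of that tuning, assuming a HikariCP pool (the answer names no particular pool, so the property names below are HikariCP's, not a general standard):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

class ShortIdlePool {
    static HikariDataSource build() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//host:1521/service"); // placeholder URL
        config.setMaximumPoolSize(10);
        config.setMinimumIdle(0);      // let the pool shrink, so idle connections can be retired
        config.setIdleTimeout(10_000); // 10 s (HikariCP's minimum); idle connections and their cursors are closed
        return new HikariDataSource(config);
    }
}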
Using batch processing will result in less overhead. See the following link for examples:
http://www.tutorialspoint.com/jdbc/jdbc-batch-processing.htm
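For the INSERT from the question, a batched version looks roughly like this, reusing the question's variables (conn stands for obj.getConnection()):

String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
try (PreparedStatement pStmt = conn.prepareStatement(sql)) {
    pStmt.setLong(1, subscriberID);
    for (String language : additionalLangs) {
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.addBatch();
    }
    pStmt.executeBatch(); // one statement, one cursor, one round trip for the whole batch
}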
In our case, we were using Hibernate and we had many variables referencing the same Hibernate mapped entity. We were creating and saving these references in a loop. Each reference opened a cursor and kept it open.
We discovered this by using a query to check the number of open cursors while running our code, stepping through with a debugger and selectively commenting things out.
As to why each new reference opened another cursor - the entity in question had collections of other entities mapped to it and I think this had something to do with it (perhaps not just this alone but in combination with how we had configured the fetch mode and cache settings). Hibernate itself has had bugs around failing to close open cursors, though it looks like these have been fixed in later versions.
Since we didn't really need so many duplicate references to the same entity anyway, the solution was to stop creating and holding onto all those redundant references. Once we did that, the problem went away.
I had this problem with my datasource in WildFly and Tomcat, connecting to an Oracle 10g.
I found that under certain conditions the statement wasn't closed even when statement.close() was invoked.
The problem was with the Oracle driver we were using: ojdbc7.jar. This driver is intended for Oracle 12c and 11g, and it seems to have some issues when used with Oracle 10g, so I downgraded to ojdbc5.jar and now everything is running fine.
I faced the same issue because I was querying the DB for more than 1000 iterations.
I used try and finally in my code, but was still getting the error.
To solve this I just logged into the Oracle DB and ran the query below:
ALTER SYSTEM SET open_cursors = 8000 SCOPE=BOTH;
And this solved my problem immediately.
I ran into this issue after setting the prepared statement cache size to a large value. Apparently, when prepared statements are kept in cache, the cursor stays open.

Logging PreparedStatements in Java

One thing that has always been a pain is logging SQL (JDBC) errors when you have a PreparedStatement instead of the query itself.
You always end up with messages like:
2008-10-20 09:19:48,114 ERROR LoggingQueueConsumer-52 [Logger.error:168] Error
executing SQL: [INSERT INTO private_rooms_bans (room_id, name, user_id, msisdn,
nickname) VALUES (?, ?, ?, ?, ?) ON DUPLICATE KEY UPDATE room_id = ?, name = ?,
user_id = ?, msisdn = ?, nickname = ?]
Of course I could write a helper method for retrieving the values and parsing/substituting the question marks with the real values (and probably will go down that path if I don't get an answer to this question), but I just wanted to know whether this problem has been solved before by someone else and/or whether there is any generic logging helper that would do it automagically for me.
Edited after a few answers:
The libraries provided so far seem to be suitable for logging the statements for debugging, which no doubt is useful. However, I am looking for a way of taking a PreparedStatement itself (not some subclass) and logging its SQL statement whenever an error occurs. I wouldn't like to deploy a production app with an alternate implementation of PreparedStatement.
I guess what I am looking for is a utility class, not a PreparedStatement specialization.
Thanks!
I tried log4jdbc and it did the job for me.
SECURITY NOTE: As of today August 2011, the logged results of a log4jdbc prepared statement are NOT SAFE to execute. They can be used for analysis, but should NEVER be fed back into a DBMS.
Example of a log generated by log4jdbc:
2010/08/12 16:30:56 jdbc.sqlonly
org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
8. INSERT INTO A_TABLE
(ID_FILE,CODE1,ID_G,ID_SEQUENCE,REF,NAME,BAR,DRINK_ID,AMOUNT,DESCRIPTION,STATUS,CODE2,REJECT_DESCR,ID_CUST_REJ)
VALUES
(2,'123',1,'2','aa','awe',null,'0123',4317.95,'Rccc','0',null,null,null)
The library is very easy to setup:
My configuration with HSQLDB:
jdbc.url=jdbc:log4jdbc:hsqldb:mem:sample
With Oracle:
jdbc.url=jdbc:log4jdbc:oracle:thin:@mybdd:1521:smt
jdbc.driverClass=net.sf.log4jdbc.DriverSpy
logback.xml :
<logger name="jdbc.sqlonly" level="DEBUG"/>
Too bad it isn't in a Maven repository, but it is still useful.
From what I tried, with the right logger level you will only get the statements that are in error; however, I don't know whether this library has an impact on performance.
This is very database-dependent. For example, I understand that some JDBC drivers (e.g. Sybase, maybe MS SQL) handle prepared statements by creating a temporary stored procedure on the server and then invoking that procedure with the supplied arguments. So the complete SQL is never actually passed from the client.
As a result, the JDBC API does not expose the information you are after. You may be able to cast your statement objects to the internal driver implementation, but probably not - your appserver may well wrap the statements in its own implementation.
I think you may just have to bite the bullet and write your own class which interpolates the arguments into the placeholder SQL. This will be awkward, because you can't ask PreparedStatement for the parameters that have been set, so you'll have to remember them in a helper object, before passing them to the statement.
It seems to me that one of the utility libraries which wrap your driver's implementation objects is the most practical way of doing what you're trying to achieve, but it's going to be unpleasant either way.
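A minimal sketch of that helper-object idea (all names are made up for illustration; the output is for logs only and must never be executed against the database):

import java.util.ArrayList;
import java.util.List;

class LoggableQuery {
    private final String sql;
    private final List<Object> params = new ArrayList<>();

    LoggableQuery(String sql) { this.sql = sql; }

    // Call this alongside each ps.setXxx(...) so the values are remembered.
    void remember(Object value) { params.add(value); }

    // Interpolates the remembered values into the placeholder SQL, for logging only.
    // Naive: does not handle '?' characters inside string literals in the SQL.
    @Override
    public String toString() {
        StringBuilder out = new StringBuilder();
        int p = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?' && p < params.size()) {
                Object v = params.get(p++);
                out.append(v instanceof String ? "'" + v + "'" : String.valueOf(v));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}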
Use P6Spy: it's Oracle, MySQL, JNDI, JMX, Spring and Maven friendly. Highly configurable.
Simple, low-level integration.
Can print the stacktrace.
Can print only heavy calls - time-threshold based.
If you are using MySQL, MySQL Connector's PreparedStatement.toString() does include the bound parameters. Though third-party connection pools may break this.
Sub-class PreparedStatement to build up the query string as parameters are added. There's no way to extract the SQL from a PreparedStatement, as it uses a compiled binary form.
LoggedPreparedStatement looks promising, though I haven't tried it.
One advantage of these over a proxy driver that logs all queries is that you can modify the query string before logging it. For example in a PCI environment you might want to mask card numbers.
