JdbcTemplate pooling and performance - java

I am working on a project where we are optimizing a legacy codebase that used boilerplate JDBC code; in the new project we use Spring's JdbcTemplate. I've found the query times in the legacy code are twice as fast, and I'm curious whether JdbcTemplate is at fault or something else...
We use Apache Commons DBCP's BasicDataSource to provide the pooling. My question is, I'm not sure whether the pooling is actually working correctly. Below are my configurations...
data source
Wiring of the datasource
To analyze this, I start the application, wire up all the JDBC pieces, and simply run the same query 100 times. I am also using log4j to get some metrics around the actual performance: one line prints the actual JDBC call time, and I have an additional wrapper around the entire JdbcTemplate call to see how long the whole thing takes (shown below)...
Edit: Adding image of RowMapper
Below shows what my logs look like...
DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL query
DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL statement [my select query]
DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
DEBUG com.custom.frameworkx.spring.datasource.DebugDataSource - before executeQuery() sql=my select query
DEBUG com.custom.frameworkx.spring.datasource.DebugDataSource - after executeQuery() [time=30ms] sql=my select query
DEBUG com.custom.frameworkx.spring.datasource.DebugDataSource - DebugResultSet.close()
DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
Full jdbcTemplate call time: 119
Notice how the actual query takes 30 milliseconds, yet the entire JdbcTemplate call takes 119 milliseconds. I assume this is JdbcTemplate overhead like acquiring and releasing resources, so it might be acceptable, but in the legacy version of this code the entire connection creation + query + resource release is still twice as fast.
While debugging through org.springframework.jdbc.datasource.DataSourceUtils, I can see the code goes to line 328, which to me doesn't really look like it's returning the connection to the pool for re-use later. Is this the expected behavior of Spring's JdbcTemplate? To me it looks like, if pooling were working, it should hit line 323.
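One way to check whether the pool is actually recycling connections, independent of the debugger, is to watch BasicDataSource's counters across repeated calls. This is a minimal sketch under the assumption of a DBCP BasicDataSource; the driver, URL, and credentials are placeholders, not the poster's real configuration:
import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class PoolCheck {
    public static void main(String[] args) {
        // Placeholder configuration; substitute your real driver/URL/credentials.
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@localhost:1521:XE");
        ds.setUsername("user");
        ds.setPassword("pass");
        ds.setInitialSize(5);
        ds.setMaxActive(10);

        JdbcTemplate template = new JdbcTemplate(ds);
        for (int i = 0; i < 100; i++) {
            template.queryForObject("select 1 from dual", Integer.class); // placeholder query
            // If pooling works, numActive drops back to 0 after each call
            // and numIdle stays roughly constant instead of growing.
            System.out.printf("active=%d idle=%d%n", ds.getNumActive(), ds.getNumIdle());
        }
    }
}
If numIdle keeps climbing or numActive never drops, connections are being created rather than reused.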

Related

Auto attached records get detached after commit

I want to fetch a list of Relation records using .fetchInto(RELATION), then iterate over the list and commit each iteration to the database. This doesn't seem to be working for me because I get Caused by: java.sql.SQLException: Connection is closed while updating the record. I don't have this problem with regular jOOQ queries, and it doesn't seem any connections are closed.
When I use contact.attach(jooq().configuration()) it seems to be working again. How can I prevent it from detaching?
I start and commit a transaction through JPA.em().getTransaction().*.
org.jooq.exception.DataAccessException: SQL [update `Relation` set `Relation`.`organizationName` = ? where `Relation`.`id` = ?]; Connection is closed
at org.jooq_3.15.1.MYSQL.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2979)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:643)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349)
at org.jooq.impl.UpdatableRecordImpl.storeMergeOrUpdate0(UpdatableRecordImpl.java:331)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:228)
at org.jooq.impl.UpdatableRecordImpl.lambda$storeUpdate$1(UpdatableRecordImpl.java:220)
at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:143)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:219)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:156)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:151)
at worker.taskprocessors.BulkContactEditWorker.execute(BulkContactEditWorker.java:144)
Example:
var contacts = jooq()
    .selectFrom(
        RELATION
            .join(BULK_CONTACT_EDIT_CONTACTS)
            .on(
                BULK_CONTACT_EDIT_CONTACTS
                    .CONTACT_ID
                    .eq(RELATION.ID)
                    .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
                    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

for (RelationRecord contact : contacts) {
    contact.attach(jooq().configuration()); // I have to add this line to make it work.
    contact.setOrganizationName("OrganizationName");
    contact.update();

    JPA.em().getTransaction().commit();
    if (!JPA.em().getTransaction().isActive()) {
        JPA.em().getTransaction().begin();
    }
}
What's the problem
You're fetching your jOOQ RelationRecord values outside of a JPA transactional context, so that fetching runs in its own transaction (this is independent of jOOQ). jOOQ will always try to acquire() and release() a connection for every query, and when that happens outside of a transactional context, then the connection will be effectively closed (e.g. returned to the connection pool). You should switch the order of starting a transaction and running the jOOQ query:
// start transaction
// run jOOQ query
// iterate jOOQ results
// run updates
// commit transaction
If you're using declarative transactions (e.g. Spring's #Transactional annotation), then you're less likely to run into this problem as your initial jOOQ query will more likely also be in the same transaction already.
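Applied to the code from the question, the reordered flow might look like the sketch below (same names as the question; the join is elided where it's unchanged):
// Begin the transaction first, so the fetch and the updates
// run on the same transactional connection.
if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

var contacts = jooq()
    .selectFrom(/* same join as above */)
    .limit(batchSize)
    .fetchInto(RELATION);

for (RelationRecord contact : contacts) {
    // No attach() needed: the records are bound to the live connection.
    contact.setOrganizationName("OrganizationName");
    contact.update();
}

JPA.em().getTransaction().commit();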
Why did attaching work
When you explicitly attach the RelationRecord to the configuration, then you will attach a record that has been attached to the previously closed connection to the new configuration with the transactional connection. I'm assuming that your jooq() method produces a new DSLContext instance, which wraps the currently active DataSource or Connection.
Switch to bulk updates
However, if your example is all there is to your actual logic (i.e. you haven't simplified it for this Stack Overflow question), then why not just run a bulk update? It will be simpler and much faster.
jooq()
    .update(RELATION)
    .set(RELATION.ORGANIZATION_NAME, "OrganizationName")
    .where(RELATION.ID.in(
        select(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID)
        .from(BULK_CONTACT_EDIT_CONTACTS)
        .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
        .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    ))
    // Since you're using MySQL, you have native UPDATE .. LIMIT support
    .limit(batchSize)
    .execute();
I'm assuming your actual example is a bit more complex, because you need to set the PROCESSED flag to true somewhere, too, but it's always good to keep this option in mind.
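If that flag update is all that's missing, a sketch in the same style (reusing the question's tables, so treat it as illustrative; a batched version would constrain it to the IDs just updated):
jooq()
    .update(BULK_CONTACT_EDIT_CONTACTS)
    .set(BULK_CONTACT_EDIT_CONTACTS.PROCESSED, true)
    .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    .execute();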

Spring Boot JPA Repository not releasing Hikari DB Connection

I have a REST API in Spring Boot using Hikari for connection pooling. Hikari is used with the default configuration (10 connections in the pool, 30-second timeout waiting for a connection). The API itself is very simple:
1. It first makes a JPA repository query to fetch some data from a Postgres DB. This part takes about 15-20 milliseconds.
2. It then sends this data to a remote REST API that is slow and can take upwards of 120 seconds.
3. Once the remote API responds, my API returns the result back to the client. A simplified version is shown below.
public ResponseEntity<Analysis> analyseData(int companyId) {
    Company company = companyRepository.findById(companyId); // takes 20 ms
    Analysis analysis = callRemoteRestAPI(company.data);     // takes 120 seconds
    return ResponseEntity.status(200).body(analysis);
}
The code does not have any @Transactional annotations. I find that the JDBC connection is held for the entire duration of my API call (i.e. ~120 s), and hence if we get more than 10 concurrent requests they time out waiting on the Hikari connection pool (30 s). But strictly speaking, my API does not need the connection after the JPA query is done (step 1 above).
Is there a way to get Spring to release this connection immediately after the query instead of holding it until the entire API call finishes processing? Can Spring be configured to get a connection for every JPA request? That way, if I have multiple JPA queries interspersed with very slow operations, server throughput is not affected and it can handle more than 10 concurrent API requests.
Essentially the problem is caused by Spring's OpenSessionInViewFilter, which "binds a Hibernate Session to the thread for the entire processing of the request". This acquires a connection from the pool when the first JPA query is executed and then holds on to it until the request has been fully processed.
This page, https://www.baeldung.com/spring-open-session-in-view, provides a clear and concise explanation of the feature. It has its pros and cons, and current opinion on its use seems to be divided.
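In Spring Boot, the open-in-view behavior can be switched off with a single property, after which the connection is returned to the pool as soon as the repository call (or its surrounding transaction) completes rather than at the end of the request:
spring.jpa.open-in-view=false
With that set, the JPA query in step 1 borrows a connection for ~20 ms and the 120-second remote call no longer ties up the pool.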

How can I capture queries sent from Java (Tomcat) to an Oracle DB?

I have an application written by a third party which uses Java/Tomcat talking to an Oracle 12c (12.2.0.1) DB. In its logs it reports "Error inserting into table" but provides no details. In talking with the author's support staff, they indicate it is old code and they have no way to give more detail. They say the application is better supported with MSSQL, which we do not support in our shop.
I would like to see what the insert statement going to the Oracle DB looks like, but I haven't been able to find it in v$sqltext. As an alternative, I was hoping to find a tool like Fiddler to view the outbound traffic on port 1521.
Is there a specific tool that would allow trapping this traffic, which is not encrypted, so I can see the "query" sent and the response coming back from the Oracle DB?
A general sniffer may work, but such tools capture a lot of extraneous traffic and require a fair amount of mucking about to find what you want.
Note:
As I mentioned in the comments, I am not a Tomcat/Java person. I think I found where the classpath is set. Given the Windows batch file below, is the "driver" that needs to be replaced bcprov-jdk16-138.jar?
set PROJLIB=..\..
set JAVA_HOME=%PROJLIB%\jdk\
set libDIR=%PROJLIB%\appserver\webapps\receiver\WEB-INF\lib
set consoleDIR=%PROJLIB%\bin\lib
set endorsedLibDir=%PROJLIB%\appserver\endorsed
set CPATH= %consoleDIR%\console.jar;%libDIR%\ebxml.jar;%libDIR%\commons-io-1.1.jar;%libDIR%\bcprov-jdk16-138.jar;%libDIR%\xercesImpl.jar
set CLASSPATH=%CPATH%
set PATH=%JAVA_HOME%\bin;%SystemRoot%;%SystemRoot%\system32
Additional Notes:
The above file is called setenv.bat.
Regarding trying to capture the SQL from the database: the application is not a Windows app; it is an app which accepts data from the network and writes it to the DB. This makes knowing precisely when to start and stop monitoring difficult, since it seems to be connected for only a very short period. It does seem to be able to read data, but not insert.
Assuming that you are using the Oracle JDBC driver and that you have the ability to replace the JDBC driver in some environment in order to debug the problem, Oracle provides versions of the JDBC driver that can be configured to log the SQL statements that are executed.
An alternative would be to create a SERVERERROR trigger in the database that logs the SQL statements that fail. I believe that would require that the failing SQL statement is well-formed, which isn't guaranteed if the third-party app is encountering an error while dynamically assembling the statement. If the statement never lands in v$sql, that may indicate it isn't well-formed, but it's worth a try.
If you're licensed to use the AWR/ ASH tables, you could also try querying dba_hist_active_sess_history. Oracle samples the active sessions every second. If the failing statement happens to be caught in the sampling, you'd see it there. If this is a typical OLTP application doing single-row inserts, you may need to run through a lot of samples in order to catch an active session with that statement but that may be reasonable.
The simplest approach, if you can localize your database session, is to look it up in gv$session, selecting on your connection's USERNAME.
Get the SID and SERIAL# of the connection and activate the 10046 trace using the following statement (substitute SID for session_id and SERIAL# for serial_num):
EXEC DBMS_MONITOR.session_trace_enable(session_id =>271, serial_num=>46473, binds=>TRUE);
Note that you need permissions for both querying gv$session and executing DBMS_MONITOR so DBA access is required to grant them to your user.
Then check the trace file on the database server in the trace folder; the trace file has a name such as xe_m005_1336.trc.
Grep for the table name; you should see something like the following, which I simulated for a failed insert on the table my_table:
=====================
PARSING IN CURSOR #854854488 len=38 dep=0 uid=104 oct=2 lid=104 tim=380974114197 hv=1259660490 ad='7ff08904d88' sqlid='1ttgvst5j9t6a'
insert into my_table(col1) values(:1 )
END OF STMT
PARSE #854854488:c=0,e=495,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=380974114195
=====================
PARSE ERROR #854854488:len=39 dep=0 uid=104 oct=2 lid=104 tim=380974117361 err=904
insert into my_table(col1) values(:1 )
Note that this is an example of an exception such as
java.sql.SQLSyntaxErrorException: ORA-00904: "COL1": invalid identifier
so the statement fails with a PARSE ERROR.
If the insert fails due to some constraint violation, you will see a sequence like this:
=====================
PARSING IN CURSOR #715594288 len=37 dep=0 uid=104 oct=2 lid=104 tim=382407621534 hv=3290870806 ad='7ff0032e238' sqlid='17t3q0v22dd0q'
insert into my_table(col) values(:1 )
END OF STMT
PARSE #715594288:c=0,e=245,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=382407621532
=====================
The cursor id is #715594288, so check with this id further in the trace file:
BINDS #715594288:
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000000 frm=01 csi=873 siz=24 off=0
kxsbbbfp=2aa71a00 bln=22 avl=02 flg=05
value=7
=====================
Here you see the bind variables passed to the insert; it was the value = 7 that caused the failure.
EXEC #715594288:c=0,e=4614,p=0,cr=7,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=382407626259
ERROR #715594288:err=2290 tim=382407626283
The statement failed during execution with an exception such as
java.sql.SQLIntegrityConstraintViolationException: ORA-02290: check constraint (XXXX.SYS_C0012357) violated
Check the documentation for further details
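One note on hygiene, since the 10046 trace adds overhead: when you're done collecting, disable tracing for the same session (same SID and SERIAL# as above):
EXEC DBMS_MONITOR.session_trace_disable(session_id => 271, serial_num => 46473);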
If you have DB access via SQL Developer...
Go to the Reports tab, then drill down through Data Dictionary, Database Administration, Sessions, and finally Sessions.
In that view, look for your app's active module(s) and check the Active SQL tab.
One of them should have your insert statement.
This might help as well...
https://docs.oracle.com/cd/E17781_01/server.112/e18804/monitoring.htm#ADMQS252
The ultimate approach is to trace the JDBC connection on the client. Please find the full documentation here.
As a first step you must get the logging JDBC driver onto the CLASSPATH. The logging driver has a suffix _g in the name, e.g. ojdbc8_g.jar if you use ojdbc8.jar.
The driver can be found in the Oracle installation in the folder jdbc/lib/.
Further, you must define a properties file, say jdbcLogging.properties, with the following content:
.level=SEVERE
oracle.jdbc.level=ALL
oracle.jdbc.handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=FINE
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
Finally, when you run the Java application you must define two system properties:
java -Doracle.jdbc.Trace=true -Djava.util.logging.config.file=jdbcLogging.properties ...
This will produce a trace on the error output where you can find the executed statements.
Example
INFO: DRCP Enabled: false
Mar 23, 2021 10:40:31 PM oracle.jdbc.driver.OracleStatement logSQL
CONFIG: BAB2F1 SQL: insert into my_table(col1) values(?)
What I ended up doing was downloading Wireshark, a network sniffer, and monitoring the TCP/IP packets.

Unable to Isolate Transactions Across Tiers in Postgres / JDBC

I'm working on a Java project that incorporates a PostgreSQL 9.0 database tier, using JDBC. SQL is wrapped in functions that are executed from Java like stored procedures, via JDBC.
The database requires a header-detail scheme, with detail records requiring the foreign-key ID to the header. Thus, the header row is written first, then a couple thousand detail records. I need to prevent the user from accessing the header until the details have completed writing.
You may suggest wrapping the entire operation in one transaction so that the header record cannot be committed until the detail records have completed writing. However, you can see below that I've isolated the transactions to calls in Java: write the header, then loop through the details (writing detail rows). Due to the sheer size of the data, it is not feasible to pass the detail data to the function to perform one transaction.
My question is: how do I wrap the transaction at the JDBC level, so that the header is not committed until the detail records have finished writing?
The best solution metaphor would be SQL Server's named transactions, where the transaction can be started in the data-access-layer code (outside other transactions) and completed in a later DB call.
The following (simplified) code executes without error, but doesn't resolve the isolation problem:
DatabaseManager mgr = DatabaseManager.getInstance();
Connection conn = mgr.getConnection();
CallableStatement proc = null;

conn.setAutoCommit(false);
proc = conn.prepareCall("BEGIN TRANSACTION");
proc.execute();

// Write header details
writeHeader(....);

for (Fault fault : faultList) {
    writeFault(fault, buno, rsmTime, dnld, faultType, verbose);
}

proc = conn.prepareCall("COMMIT TRANSACTION");
proc.execute();
Your brilliant answer will be much appreciated!
Are you using the same connection for writeHeader and writeFault?
conn.setAutoCommit(false);

headerProc = conn.prepareCall("headerProc...");
headerProc.setString(...);
headerProc.execute();

detailProc = conn.prepareCall("detailProc...");
for (Fault fault : faultList) {
    detailProc.setString(...);
    detailProc.execute();
    detailProc.clearParameters();
}

conn.commit();
And then you should really look at "addBatch" for that detail loop.
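For what it's worth, a minimal sketch of that detail loop with batching; the procedure name and the Fault accessors are hypothetical stand-ins for whatever writeFault actually does:
// Same connection as the header, autocommit still off.
try (CallableStatement detailProc =
        conn.prepareCall("{ call write_fault(?, ?) }")) { // hypothetical proc
    for (Fault fault : faultList) {
        detailProc.setString(1, fault.getCode());     // hypothetical accessor
        detailProc.setString(2, fault.getMessage());  // hypothetical accessor
        detailProc.addBatch();   // queue the call instead of executing it
    }
    detailProc.executeBatch();   // one round trip for the whole batch
}
conn.commit();                   // header and details become visible together
Batching keeps the header and its couple thousand detail rows in one transaction while cutting the per-row network round trips.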
While it seems you've solved your immediate issue, you may want to look into JTA if you're running inside a Java EE container. JTA combined with EJB3.1* lets you do declarative transaction control and greatly simplifies transaction management in my experience.
*Don't worry, EJB3.1 is much simpler and cleaner and less horrid than prior EJB specs.

ORA-01000: maximum open cursors exceeded when using Spring SimpleJdbcCall

We are using Spring SimpleJdbcCall to call stored procedures in Oracle that return cursors. It looks like SimpleJdbcCall isn't closing the cursors and after a while the max open cursors is exceeded.
ORA-01000: maximum open cursors exceeded ; nested exception is java.sql.SQLException: ORA-01000: maximum open cursors exceeded spring
There are a few other people on forums who've experienced this, but seemingly no answers. It looks to me like a bug in the Spring/Oracle support.
This bug is critical and could impact our future use of Spring JDBC.
Has anybody come across a fix - either tracking the problem to the Spring code or found a workaround that avoids the problem?
We are using Spring 2.5.6.
Here is the new version of the code, using SimpleJdbcCall, which appears not to be closing the result set that the proc returns via a cursor:
...
SimpleJdbcCall call = new SimpleJdbcCall(dataSource);

Map<String, Object> params = new HashMap<String, Object>();
params.put("remote_user", session.getAttribute("cas_username"));

Map<String, Object> result = call
    .withSchemaName("urs")
    .withCatalogName("ursWeb")
    .withProcedureName("get_roles")
    .returningResultSet("rolesCur", new au.edu.une.common.util.ParameterizedMapRowMapper())
    .execute(params);

List roles = (List) result.get("rolesCur");
The older version of the code, which doesn't use Spring JDBC, doesn't have this problem:
oracleConnection = dataSource.getConnection();

callable = oracleConnection.prepareCall("{ call urs.ursweb.get_roles(?, ?) }");
callable.setString(1, (String) session.getAttribute("cas_username"));
callable.registerOutParameter(2, oracle.jdbc.OracleTypes.CURSOR);
callable.execute();

ResultSet rset = (ResultSet) callable.getObject(2);
// ... do stuff with the result set

if (rset != null) rset.close();                          // Explicitly close the result set
if (callable != null) callable.close();                  // Close the callable
if (oracleConnection != null) oracleConnection.close();  // Close the connection
It would appear that Spring JDBC is NOT calling rset.close(). If I comment out that line in the old code, then after load testing we get the same database exception.
After much testing we have fixed this problem. It was a combination of how we were using the Spring framework, the Oracle client, and the Oracle DB. We were creating new SimpleJdbcCalls, and those used the Oracle JDBC client's metadata calls, which returned cursors that were never closed and cleaned up. I consider this a bug in the Spring JDBC framework: it queries the metadata but then does not close the cursor. Spring should copy the metadata out of the cursor and close it properly. I haven't bothered opening a JIRA issue with Spring because if you use best practice the bug isn't exhibited.
Tweaking OPEN_CURSORS or any of the other parameters is the wrong way to fix this problem and just delays it from appearing.
We worked around it/fixed it by moving the SimpleJdbcCall into a singleton DAO, so there is only one cursor open for each Oracle proc that we call. These cursors are open for the lifetime of the app, which I consider a bug. As long as OPEN_CURSORS is larger than the number of SimpleJdbcCall objects, there won't be hassles.
Well, I got this problem when I was reading BLOBs. The main cause was that I was also updating the table, and the Statement containing the update clause was not closed automatically. The nasty cursor leak ate all the free cursors. After an explicit call to statement.close() the error disappeared.
Moral: always close everything explicitly; don't rely on statements being closed automatically when they are discarded.
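On Java 7 and later, try-with-resources makes this hard to get wrong. A minimal sketch, assuming a plain DataSource and a hypothetical process() handler (the enclosing method declares throws SQLException):
// Each resource is closed automatically, in reverse order,
// even if an exception is thrown mid-block.
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("select id from my_table");
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        process(rs.getLong("id")); // hypothetical handler
    }
}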
Just be careful setting OPEN_CURSORS to higher and higher values, as there are overheads and it could just be band-aiding over an actual problem/error in your code.
I don't have experience with the Spring side of this, but I worked on an app where we had many issues with ORA-01000 errors, and constantly adjusting OPEN_CURSORS just made the problem go away for a little while...
I can promise you that it's not Spring. I worked on a Spring 1.x app that went live in 2005 and hasn't leaked a connection since (WebLogic 9, JDK 5). You aren't closing your resources properly.
Are you using a connection pool? Which app server are you deploying to? Which version of Spring? Oracle? Java? Details, please.
Oracle's OPEN_CURSORS is the key, all right. We have a small 24x7 app running against Oracle XE with only a few apparently open cursors, and we had intermittent max-open-cursors errors until we set the OPEN_CURSORS initialization parameter to > 300.
The solution is not in Spring but in Oracle: you need to set the OPEN_CURSORS initialization parameter to some value higher than the default 50.
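For reference, a DBA can raise it with the standard ALTER SYSTEM syntax (300 here is just an example value; size it to your workload, and mind the earlier caveat about band-aiding over a leak):
ALTER SYSTEM SET open_cursors = 300 SCOPE=BOTH;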
Oracle -- at least as of 8i, perhaps it's changed -- would re-parse JDBC PreparedStatement objects unless you left them open. This was expensive, and most people ended up maintaining a fixed pool of open statements that were resubmitted.
(Taking a quick look at the 10g docs, they explicitly note that the OCI driver will cache PreparedStatements, so I'm assuming that the thin driver still recreates them each time.)
