Statement.setQueryTimeout doesn't work on Oracle 18c jdbc driver - java

We have a J2EE application on a Payara 5.2020 server that executes a long-running query (a PL/SQL call that runs for a couple of hours).
To avoid a timeout exception, we use this statement-level setting:
statement.setQueryTimeout(0);
This works with the Oracle 12c JDBC driver, but after migrating to Oracle 18c and switching to the 18c driver, the query execution stops after 15 minutes with the exception below. The same code works against both Oracle 12 and Oracle 18 servers with the 12c driver, so it is the change of the driver's jar that brings up the problem.
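For context, the call is made roughly along these lines (a minimal sketch assuming a container-managed DataSource; the JNDI name and the PL/SQL procedure name are placeholders, and exception handling is omitted):
// Sketch of the failing pattern; "jdbc/myPool" and LONG_RUNNING_PROC are placeholders.
DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myPool");
try (Connection conn = ds.getConnection();
     CallableStatement statement = conn.prepareCall("{ call LONG_RUNNING_PROC() }")) {
    statement.setQueryTimeout(0); // 0 means "no limit" per the java.sql.Statement contract
    statement.execute();          // runs for hours; with the 18c driver it dies after ~15 minutes
}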
The problem has been reproduced on both Linux and Windows machines:
2021-06-14T07:50:01.762+0200|SEVERE: java.sql.SQLRecoverableException: Error de E/S: Socket read interrupted
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:946)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1136)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3640)
at oracle.jdbc.driver.T4CCallableStatement.executeInternal(T4CCallableStatement.java:1318)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3752)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4242)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1079)
at com.sun.gjc.spi.base.PreparedStatementWrapper.execute(PreparedStatementWrapper.java:532)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.sun.gjc.spi.jdbc40.ProfiledConnectionWrapper40$1.invoke(ProfiledConnectionWrapper40.java:437)
at com.sun.proxy.$Proxy324.execute(Unknown Source)
at org.apache.jsp.index_jsp.callPL(index_jsp.java:49)
at org.apache.jsp.index_jsp._jspService(index_jsp.java:108)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:411)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:473)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:377)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1636)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:259)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:757)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:577)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:158)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:371)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:238)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:520)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:217)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:182)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:156)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:218)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:95)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:260)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:177)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:109)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:88)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:53)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:524)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:94)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:33)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:114)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.InterruptedIOException: Socket read interrupted
at oracle.net.nt.TimeoutSocketChannel.handleInterrupt(TimeoutSocketChannel.java:262)
at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:184)
at oracle.net.ns.NSProtocolNIO.doSocketRead(NSProtocolNIO.java:544)
at oracle.net.ns.NIOPacket.readHeader(NIOPacket.java:234)
at oracle.net.ns.NIOPacket.readPacketFromSocketChannel(NIOPacket.java:174)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:122)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:100)
at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:86)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForUnmarshall(T4CMAREngineNIO.java:762)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:427)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:394)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:255)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:610)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:249)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:82)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:924)
It seems the transport layer has been migrated to java.nio and the setQueryTimeout method no longer has the expected effect.
Things we've tried:
1. Setting the default statement timeout to -1 in the JDBC Connection Pool Advanced Attributes screen in the Payara console.
2. Setting the timeout directly on the connection with conn.setNetworkTimeout(Executors.newFixedThreadPool(1), 0), which had no effect.
3. Several sources indicate that the properties below should affect the network timeout evaluation. We set them as JVM properties for Payara startup (-Doracle.net.CONNECT_TIMEOUT=xxx) and as JDBC Connection Pool properties, in both cases with values 0 and -1. It didn't work in any case.
oracle.net.CONNECT_TIMEOUT
oracle.net.READ_TIMEOUT
oracle.jdbc.ReadTimeout
Sources:
Oracle 18c Net services best practices
Oracle 18c java jdbc reference. E.1.5 Using JDBC with Firewalls
4. As we are accessing the DataSource through the Payara DataSource pool, we cannot cast the com.sun.gjc.spi.jdbc40.DataSource40 (the class provided by Payara) to an OracleDataSource, so we created the DataSource programmatically to set the connection properties as shown here, with the properties listed above, but it doesn't work:
public static Properties oracleProperties() {
    // Already tried with -1 and 0
    Properties properties = new Properties();
    properties.setProperty("oracle.net.CONNECT_TIMEOUT", "0");
    properties.setProperty("oracle.net.READ_TIMEOUT", "0");
    properties.setProperty("oracle.jdbc.ReadTimeout", "0");
    return properties;
}

public static OracleDataSource createDataSource() throws Exception {
    OracleDataSource ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:@itauc4602x:1521/BDExp");
    ods.setUser("enevac");
    ods.setPassword("enevac");
    ods.setDataSourceName("OracleXADataSource");
    ods.setLoginTimeout(0);
    // default connection properties to avoid the timeout exception
    ods.setConnectionProperties(oracleProperties());
    return ods;
}
Has anyone faced this problem? Any idea on how to avoid the timeout restriction?
Why 15 minutes? According to the reference, the default value for oracle.net.ReadTimeout is 10 minutes.
Update:
To explain in more detail why I think the problem is in the driver, and why I discard other possible origins of the exception, I assume the timeout could be raised from three sources:
Network timeout: I discard it because I'm testing a Payara server on my local machine against the development database, with no firewall in between.
Database server: the DBA has checked the Oracle Net Services configuration and there is no limit set that explains the 15-minute cut. Besides, in that case an SQLException with some kind of ORA-xxx error code would be expected.
JDBC: this can be set at connection level, statement level and transaction level. As I said at the beginning, the code works with the Oracle 12c driver against both Oracle 12 and Oracle 18 servers; it was the change of the driver's jar that made the code stop working.

Finally the problem was fixed by configuring the "connectionProperties" custom property of the OracleDataSource in the Payara pool. As @ibre5041 pointed out, setting the property oracle.jdbc.javaNetNio=false changes the transport layer used by the driver, and it starts working as the previous Oracle 12c version did.
According to the Oracle reference, an OracleDataSource implementation can receive the connection properties as a java.util.Properties object.
Table 8-2 Oracle Extended Data Source Properties
Name: connectionProperties
Type: java.util.Properties
Description: Specifies the connection properties.
To set a multivalued property on the JDBC pool in the Payara Admin Console, you have to set the properties as (prop1=value1,prop2=value2) (thank you again Ondro Mihályi). So in our case we set:
connectionProperties = (oracle.jdbc.ReadTimeout=0, oracle.jdbc.javaNetNio=false)
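For anyone configuring the datasource programmatically instead of through the Admin Console, the equivalent would be roughly this (a sketch reusing the URL from the question; only the two properties above are set, and exception handling is omitted):
Properties props = new Properties();
props.setProperty("oracle.jdbc.ReadTimeout", "0");    // no read timeout
props.setProperty("oracle.jdbc.javaNetNio", "false"); // use the pre-18c (non-NIO) transport layer

OracleDataSource ods = new OracleDataSource();
ods.setURL("jdbc:oracle:thin:@itauc4602x:1521/BDExp");
ods.setConnectionProperties(props);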
As a summary of what works and what doesn't with the Oracle 18c JDBC driver (every step tested separately):
Setting timeout at statement level doesn't work:
statement.setQueryTimeout(0)
Setting timeout at connection level, with -1 or 0, doesn't work:
conn.setNetworkTimeout(Executors.newFixedThreadPool(1), timeout in ms)
Setting the timeout properties programmatically as java.util.Properties on the OracleDataSource makes it work, as indicated in the question.
Setting the timeout properties as JVM properties works if the limit is below 15 minutes, but if you set a value greater than 15 minutes the exception is still thrown, and setting 0 or -1 has no effect.
So this makes the query stop after 10 seconds:
-Doracle.net.CONNECT_TIMEOUT=10000 -Doracle.net.READ_TIMEOUT=10000 -Doracle.jdbc.ReadTimeout=10000
But with this it still stops after 15 minutes:
-Doracle.net.CONNECT_TIMEOUT=-1 -Doracle.net.READ_TIMEOUT=-1 -Doracle.jdbc.ReadTimeout=-1
Setting oracle.net.keepAlive=true as a JVM property, as @Nirmala suggested, doesn't work.
Setting oracle.jdbc.javaNetNio=false as a JVM property, as @ibre5041 suggested, makes it work. So it points to some problem with the java.nio transport layer.
Anyway, we have opened a support case with Oracle, because the JDBC API call statement.setQueryTimeout(0) should work without having to configure the datasource. I'll post the response when the case is closed.

The query execution could be stopped because of the default TCP connection timeout. Can you set the keepalive property "oracle.net.keepAlive" to "true" and verify?

Related

multiple Threads read to the same table in database by using the same connection in java?

I have defined three transactions in which SELECT operations happen with different parameters passed. I try to invoke this method concurrently, and I get an error:
o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: null
Aug 25, 2020 # 12:16:39.000 2020-08-25 06:46:39.388 ERROR 1 --- [o-9003-exec-630] o.h.engine.jdbc.spi.SqlExceptionHelper : Hikari - Connection is not available, request timed out after 60000ms.
And sometimes
org.postgresql.util.PSQLException: FATAL: remaining connection slots are reserved for non-replication superuser connections
I am new to Java. Please guide me to solve this issue. Do I need to write multithreading code to access a number of resources, or is this a configuration issue?
hikari:
poolName: Hikari
autoCommit: false
minimumIdle: 5
connectionTimeout: 60000
maximumPoolSize: 80
idleTimeout: 60000
maxLifetime: 240000
leakDetectionThreshold: 300000
Multiple Threads read to the same table in database by using the same connection in java?
This is, generally speaking, not going to work. The JDBC API types Connection, Statement, ResultSet and so on are not generally thread-safe [1]. You should not try to use one instance in multiple threads.
If you want to avoid having multiple connections open, the normal approach is to use a JDBC connection pool to manage the connections. When a thread needs to talk to the database, it gets a connection from the pool. When it has finished talking to the database, it releases the connection back to the pool.
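A minimal sketch of that pattern, assuming the pool is exposed as a javax.sql.DataSource (for example a HikariDataSource); the table and column names are made up:
// Each worker thread borrows its own connection; nothing is shared between threads.
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT name FROM customer WHERE id = ?")) {
    ps.setLong(1, customerId);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            String name = rs.getString("name"); // use the row here
        }
    }
} // closing the connection here returns it to the pool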
In the PostgreSQL / Hikari case:
For PostgreSQL - "Using the driver in a multi-threaded or a servlet environment"
For Hikari - the getConnection() call is thread-safe, but I couldn't find anything that explicitly talks about the thread-safety of the connection object when shared by multiple threads.
[1] I have seen it stated that a spec-compliant JDBC driver should be thread-safe, but I could not see where the JDBC spec actually requires this. But even assuming that it does say so somewhere, threads sharing a connection would need to coordinate very carefully to avoid things like one thread causing another thread's ResultSet to "spontaneously" close.

How to solve "Socket read timed out" when using hikari connection pool

I am developing an application using the Play framework (version 2.8.0) and Java (version 1.8) with an Oracle database (version 12c).
There are only zero or one hits to the database in a day, and I am getting the error below.
java.sql.SQLRecoverableException: IO Error: Socket read timed out
at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:919)
at oracle.jdbc.driver.PhysicalConnection.close(PhysicalConnection.java:2005)
at com.zaxxer.hikari.pool.PoolBase.quietlyCloseConnection(PoolBase.java:138)
at com.zaxxer.hikari.pool.HikariPool.lambda$closeConnection$1(HikariPool.java:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Socket read timed out
at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:174)
at oracle.net.ns.NIOHeader.readHeaderBuffer(NIOHeader.java:82)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:139)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:101)
at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:80)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForReading(T4CMAREngineNIO.java:98)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:534)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:485)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C7Ocommoncall.doOLOGOFF(T4C7Ocommoncall.java:62)
at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:908)
... 6 common frames omitted
db {
  default {
    driver = oracle.jdbc.OracleDriver
    url = "jdbc:oracle:thin:@XXX.XXX.XXX.XX:XXXX/XXXXXXX"
    username = "XXXXXXXXX"
    password = "XXXXXXXXX"
    hikaricp {
      dataSource {
        cachePrepStmts = true
        prepStmtCacheSize = 250
        prepStmtCacheSqlLimit = 2048
      }
    }
  }
}
It seems this is caused by an inactive database connection. How can I solve it?
Please let me know if any other information is required.
You can enable TCP keepalive for JDBC either by setting a driver directive or by adding "ENABLE=BROKEN" into the connection string.
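For illustration, the directive goes inside the DESCRIPTION part of the connect descriptor; something like this (host, port and service name are placeholders):
jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DB)))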
Usually Cisco/Juniper firewalls cut off a TCP connection when it is inactive for more than an hour,
while the Linux kernel starts sending keepalive probes only after two hours (tcp_keepalive_time). So if you decide to turn TCP keepalive on, you will also need root access to change this kernel tunable to a lower value (10-15 minutes).
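On Linux that tunable is net.ipv4.tcp_keepalive_time (in seconds); for example, to bring it down to 15 minutes (a sketch, to be applied by root via sysctl):
# /etc/sysctl.conf -- the kernel default is 7200 seconds (2 hours)
net.ipv4.tcp_keepalive_time = 900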
Moreover, HikariCP should not keep any connection open for longer than 30 minutes, by default.
So if your FW, Linux kernel and HikariCP all use default settings, then this error should not occur in your system.
See the HikariCP official documentation:
maxLifetime: This property controls the maximum lifetime of a connection in the pool. An in-use connection will never be retired, only when it is closed will it then be removed. On a connection-by-connection basis, minor negative attenuation is applied to avoid mass-extinction in the pool. We strongly recommend setting this value, and it should be several seconds shorter than any database or infrastructure imposed connection time limit. A value of 0 indicates no maximum lifetime (infinite lifetime), subject of course to the idleTimeout setting. The minimum allowed value is 30000ms (30 seconds). Default: 1800000 (30 minutes)
I have added the below configuration for HikariCP in the configuration file and it is working fine.
## Database Connection Pool
play.db.pool = hikaricp
play.db.prototype.hikaricp.connectionTimeout=120000
play.db.prototype.hikaricp.idleTimeout=15000
play.db.prototype.hikaricp.leakDetectionThreshold=120000
play.db.prototype.hikaricp.validationTimeout=10000
play.db.prototype.hikaricp.maxLifetime=120000

Getting org.apache.openjpa.persistence.OptimisticLockException: Unable to obtain an object lock on "null"

I am getting the below exception while doing EntityManager.find(). We are using a DB2 database and a WAS 8.0 server for our application. Any help is greatly appreciated.
Caused by: <openjpa-2.1.2-SNAPSHOT-r422266:1709309 nonfatal store error> org.apache.openjpa.persistence.OptimisticLockException: Unable to obtain an object lock on "null".
at org.apache.openjpa.jdbc.sql.DBDictionary.narrow(DBDictionary.java:4930)
at org.apache.openjpa.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:4908)
at org.apache.openjpa.jdbc.sql.DB2Dictionary.newStoreException(DB2Dictionary.java:603)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:136)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:110)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:62)
at org.apache.openjpa.jdbc.kernel.PreparedSQLStoreQuery$PreparedSQLExecutor.executeQuery(PreparedSQLStoreQuery.java:139)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:1012)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:870)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:801)
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:542)
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:315)
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:331)
... 116 more
Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: DB2 SQL Error: SQLCODE=-913, SQLSTATE=57033, SQLERRMC=00C90088;00000304;ODNC001 .SNCPC145.X'200D65' '.X'43', DRIVER=4.15.134 {prepstmnt -1803801027
SELECT a.column1
FROM table_test a
WHERE (a.column2 = ? AND a.column3 = ?)
[params=(String) 00000, (String) 000011]} [code=-913, state=57033]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:281)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:265)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$700(LoggingConnectionDecorator.java:72)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeQuery(LoggingConnectionDecorator.java:1183)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:284)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement.executeQuery(JDBCStoreManager.java:1787)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:274)
at org.apache.openjpa.jdbc.kernel.PreparedSQLStoreQuery$PreparedSQLExecutor.executeQuery(PreparedSQLStoreQuery.java:118)
... 123 more
SQLCODE -913 with SQLERRMC=00C90088 means that your connection experienced a DEADLOCK.
If your Db2 server is running on z/OS, then ask your DBA for help to find the other Db2 connection and the SQL statements running in both transactions. The access plans and isolation levels used by both connections are also relevant, as are any applicable lock timeouts. The Db2 server DBA has access to diagnostic tools to help you.
There are many hits online giving advice on how to reduce the likelihood of Db2 deadlocks, so do your research.
You will need to know the isolation level being used by the Websphere connection (or package, or SQL-statement(s)), and all the statements in the Db2-transaction for your connection.
The other tokens in the message are also relevant, i.e. ODNC001.SNCPC145 may be the involved table.
The version of the JDBC type 4 driver being used by your WebSphere is out of date (it looks like it is from a Db2 v10.1 fixpack 5 build), so consider getting that upgraded to a current version.

trying to find db connection leak in my code, using Spring / JPA / Hikari

I've got a problem with a Spring web application that periodically runs into an error fetching a connection from my connection pool. Eventually in the logs I see entries like:
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
The only way I've found to recover once it hits this point is to restart Tomcat.
I think the most likely explanation is I have some code somewhere that is not properly cleaning up its connection - returning it to Hikari, leaving something open so Spring can't clean it up, etc.
To troubleshoot I've set my hikari config leakDetectionThreshold to 5000ms and enabled logging. After that, I see log entries like
2018-04-24 19:53:56 WARN ProxyLeakTask:87 - Connection leak detection triggered for org.postgresql.jdbc.PgConnection@664ec666, stack trace follows
java.lang.Exception: Apparent connection leak detected
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:35)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:99)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:129)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:47)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:146)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:172)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:148)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1940)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1909)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1887)
at org.hibernate.loader.Loader.doQuery(Loader.java:932)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:349)
at org.hibernate.loader.Loader.doList(Loader.java:2615)
at org.hibernate.loader.Loader.doList(Loader.java:2598)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2430)
at org.hibernate.loader.Loader.list(Loader.java:2425)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:335)
at org.hibernate.internal.SessionImpl.listCustomQuery(SessionImpl.java:2129)
at org.hibernate.internal.AbstractSharedSessionContract.list(AbstractSharedSessionContract.java:981)
at org.hibernate.query.internal.NativeQueryImpl.doList(NativeQueryImpl.java:147)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1398)
at org.hibernate.query.internal.AbstractProducedQuery.getSingleResult(AbstractProducedQuery.java:1444)
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.orm.jpa.SharedEntityManagerCreator$DeferredQueryInvocationHandler.invoke(SharedEntityManagerCreator.java:379)
at com.sun.proxy.$Proxy163.getSingleResult(Unknown Source)
at com.mycompany.web.jpa.util.DBHelper.getPagedMappedDbResults(DBHelper.java:76)
at com.mycompany.web.jpa.repository.TaskRepositoryImpl.findTaskDetailsByStepIdAndIdIn(TaskRepositoryImpl.java:245)
......
So it is detecting a possible leak. Could be a false positive I suppose? But this is also the only class in my app that is doing database access outside of the standard service/repository pattern often used in Spring apps, so it seems like a likely culprit, and it's my best lead at the moment.
Anyway, the last piece of non-library code I see in the trace (i.e. stuff I wrote, so most likely to be the cause of the leak!) is my DBHelper::getPagedMappedDbResults method, the relevant bit of which is included here:
Query q = entityManager.createNativeQuery(countQueryText);
setQueryParameters(q, parameters);
long numActualResults = 0;
try {
    numActualResults = ((Number) q.getSingleResult()).longValue(); // line 76
} catch (Exception e) {
    System.out.println("just in case: " + e);
}
So basically I create a Query object from my EntityManager instance, set some parameters, and run it to get some results.
Is there something I need to be doing with a Query object when I'm done with it? q.cleanup()? I don't see anything like this from reading the docs, but am I not doing good housekeeping on this resource?
The entityManager itself is created from an @Autowired annotation. My understanding is that if I didn't "new" it to instantiate it and instead let the Spring framework autowire it, then Spring will do whatever cleanup is necessary. Is that right? Or do I need to be doing some cleanup after I use the entityManager?
Version details:
Tomcat 8 / Java 8
Spring 5.0.0.RELEASE
Spring Data Kay-RELEASE
Hibernate 5.2.3.Final
Hikari 2.4.5
Any advice or suggestions would be greatly appreciated, thanks!
What is the query? Is it heavy? Maybe you have a deadlock here? The connection management looks fine: you do not acquire the connection explicitly, so there is no need to release it. The query might be long-running, so Hibernate is not able to complete it and release the connection.
Also, you can check the number of open connections on the DB side. Do some analysis on that side as well.
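Since the stack trace shows a PostgreSQL connection, one quick way to do that analysis is to look at pg_stat_activity (a sketch; requires sufficient privileges on the database):
-- Open connections grouped by state (active, idle, idle in transaction, ...)
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;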

Single JDBC OracleDataSource/HikariCP with primary/backup DB

I'm trying to set up a single connection pool which references our primary database until it becomes unhealthy, after which the pool fails over and fills up against our backup. Until now I've been taking advantage of an undocumented feature of our application server's JNDI datasources which allows me to specify 2 JDBC connection URL strings, thusly:
jdbc:oracle:thin:@primary:1521:DB|jdbc:oracle:thin:@backup:1521:DB
I have the following code, no doubt partially cribbed from some Hikari/Spring documentation months ago.
@Bean(name = "dataSource")
public DataSource dataSource() throws SQLException {
    String userName = "user";
    String password = "pass";
    String server = "primary";
    String database = "DB";

    OracleDataSource ods = new OracleDataSource();
    ods.setServerName(server);
    ods.setDatabaseName(database);
    ods.setNetworkProtocol("tcp");
    ods.setUser(userName);
    ods.setPassword(password);
    ods.setPortNumber(1521);
    ods.setDriverType("thin");

    HikariConfig hkConfig = new HikariConfig();
    hkConfig.setDataSource(ods);
    hkConfig.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
    hkConfig.setPoolName("springHikariRECPool");
    hkConfig.setMaximumPoolSize(15);
    hkConfig.setMinimumIdle(3);
    hkConfig.setMaxLifetime(1800000); // 30 minutes
    return new HikariDataSource(hkConfig);
}
My Google-Fu has failed me. Does anyone have any ideas on how to achieve the failover functionality?
Edit - re. @M. Deinum: "Remove the construction of the OracleDataSource and just set the url on the HikariConfig."
HikariConfig hkConfig = new HikariConfig();
hkConfig.setUsername(userName);
hkConfig.setPassword(password);
hkConfig.setJdbcUrl("jdbc:oracle:thin:@primary:1521:DB|jdbc:oracle:thin:@backup:1521:DB");
hkConfig.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
hkConfig.setPoolName("springHikariRECPool");
hkConfig.setMaximumPoolSize(15);
hkConfig.setMinimumIdle(3);
hkConfig.setMaxLifetime(1800000);
Unfortunately, this yields a fairly long stack, the base of which is this:
Caused by: java.sql.SQLException: Invalid Oracle URL specified: OracleDataSource.makeURL
at oracle.jdbc.pool.OracleDataSource.makeURL(OracleDataSource.java:1277)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:185)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:356)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:199)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:444)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:515)
Investigation of that here - "Hikaricp Oracle connection issue" - and here - "Invalid Oracle URL specified: OracleDataSource.makeURL" - caused me to add some additional properties:
hkConfig.addDataSourceProperty("portNumber", "1521");
hkConfig.addDataSourceProperty("driverType", "thin");
Which now bombs with:
Caused by: java.net.UnknownHostException: null: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:117)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:370)
The JDBC URL is no longer being referenced, it would appear... and, confirmed - I took the backup connection string out of the URL and reached the same exception with a standard, single-server connection. So it appears the ODS demands to be configured as originally done (or mimicked with Properties).
As a last gasp for this method, I tried setting the serverName property to "primary|backup" and, as expected, that blew up as well:
Caused by: java.net.UnknownHostException: primary|backup: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:117)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:411)
... 56 more
I have failed to note thus far that I am using ojdbc7.jar.
Use the standard way. Support for Data Guard, failover and RAC is a native feature of Oracle JDBC drivers.
1st: use tnsnames.ora as described here: "How to connect JDBC to tns oracle".
2nd: use multiple hosts in tnsnames.ora:
DB =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=off)
      (FAILOVER=ON)
      (ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=backup)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=DB)))
The Oracle JDBC driver will connect to the host where the database is "OPEN" and the service named "DB" is present.
PS: you can also pass the whole TNS connection string to the JDBC driver directly as a parameter:
url="jdbc:oracle:thin:#(DESCRIPTION=
(LOAD_BALANCE=on)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=secondary)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=DB)))"
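Tying this back to the HikariCP setup from the question, the whole descriptor can be used directly as the JDBC URL, without the OracleDataSource or dataSourceClassName (a sketch; hosts and service name as in the question):
HikariConfig hkConfig = new HikariConfig();
hkConfig.setUsername(userName);
hkConfig.setPassword(password);
hkConfig.setJdbcUrl("jdbc:oracle:thin:@(DESCRIPTION="
        + "(ADDRESS_LIST=(LOAD_BALANCE=off)(FAILOVER=on)"
        + "(ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521))"
        + "(ADDRESS=(PROTOCOL=TCP)(HOST=backup)(PORT=1521)))"
        + "(CONNECT_DATA=(SERVICE_NAME=DB)))");
hkConfig.setPoolName("springHikariRECPool");
hkConfig.setMaximumPoolSize(15);
hkConfig.setMinimumIdle(3);
return new HikariDataSource(hkConfig);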
