HikariCP on AppEngine - java

When deploying a Spring Boot application that uses HikariCP as the connection pool on App Engine, I get database/thread-related errors when serving some requests:
Caused by: java.sql.SQLNonTransientConnectionException: Could not create connection to database server.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63)
at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:1008)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:825)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:455)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:207)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:136)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:369)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:198)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:467)
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:706)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:692)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: com.google.apphosting.api.ApiProxy$CallNotFoundException: Can't make API call memcache.Get in a thread that is neither the original request thread nor a thread created by ThreadManager
Then I discovered that App Engine only allows applications to create threads through its ThreadFactory, so I configured Hikari to use App Engine's thread factory as follows:
DataSource ds = new HikariDataSource();
try {
    final HikariConfig dataSourceConfig = new HikariConfig();
    dataSourceConfig.setDriverClassName(applicationProperties.getDatasource().getDriverClassName());
    dataSourceConfig.setJdbcUrl(applicationProperties.getDatasource().getUrl());
    dataSourceConfig.setUsername(applicationProperties.getDatasource().getUsername());
    dataSourceConfig.setPassword(applicationProperties.getDatasource().getPassword());
    dataSourceConfig.setRegisterMbeans(false);
    if (Objects.equal(ProfileResolver.getActiveCloudPlatform(env), ProfileConstants.SPRING_PROFILE_GCP)) {
        log.info("[GCP] Set 'com.google.appengine.api.ThreadManager.backgroundThreadFactory()' "
                + "as the instance of the java.util.concurrent.ThreadFactory");
        // Let Hikari create its housekeeping threads through App Engine's ThreadManager
        dataSourceConfig.setThreadFactory(ThreadManager.backgroundThreadFactory());
    }
    ds = new HikariDataSource(dataSourceConfig);
} catch (final Exception e) {
    throw new IllegalStateException(e);
}
return ds;
It works on my local App Engine dev server, but when deployed I get an exception at datasource initialisation, because App Engine's autoscaling modules don't allow the use of background threads.
Is it possible to keep the "autoscaling" ability while using HikariCP on AppEngine ?

The Java 8 runtime doesn't have the same restrictions around threads as prior versions of App Engine. For example, this sample app uses HikariCP to connect to Cloud SQL and works without a custom thread manager.
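For illustration, a minimal sketch of a HikariCP datasource on the Java 8 standard runtime, using the Cloud SQL JDBC socket factory and no custom thread factory. The instance connection name, database name, and credentials are placeholders, and the sketch assumes the Cloud SQL MySQL socket factory artifact is on the classpath.

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class CloudSqlDataSourceFactory {

    public static DataSource create() {
        HikariConfig config = new HikariConfig();
        // The socket factory routes the connection to the Cloud SQL instance,
        // so no host/IP is needed in the URL (all values below are placeholders).
        config.setJdbcUrl("jdbc:mysql:///MY_DATABASE"
                + "?cloudSqlInstance=my-project:us-central1:my-instance"
                + "&socketFactory=com.google.cloud.sql.mysql.SocketFactory");
        config.setUsername("db_user");
        config.setPassword("db_password");
        config.setMaximumPoolSize(10);
        // No setThreadFactory(...) call: the Java 8 runtime allows regular threads.
        return new HikariDataSource(config);
    }
}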

Related

Statement.setQueryTimeout doesn't work on Oracle 18c jdbc driver

We have a J2EE application on a Payara 5.2020 server that executes a long-running query (a PL/SQL procedure that runs for a couple of hours).
To avoid a timeout exception, we use this statement-level setting:
statement.setQueryTimeout(0);
This works with the Oracle JDBC driver version 12c, but after migrating to Oracle 18c and changing the driver to version 18c, the query execution stops after 15 minutes with the exception below. The code works against both Oracle 12 and Oracle 18 servers; it is the change of the driver's jar that brings up the problem.
The problem has been reproduced in Linux and Windows machines:
2021-06-14T07:50:01.762+0200|SEVERE: java.sql.SQLRecoverableException: Error de E/S: Socket read interrupted
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:946)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1136)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3640)
at oracle.jdbc.driver.T4CCallableStatement.executeInternal(T4CCallableStatement.java:1318)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3752)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4242)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1079)
at com.sun.gjc.spi.base.PreparedStatementWrapper.execute(PreparedStatementWrapper.java:532)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.sun.gjc.spi.jdbc40.ProfiledConnectionWrapper40$1.invoke(ProfiledConnectionWrapper40.java:437)
at com.sun.proxy.$Proxy324.execute(Unknown Source)
at org.apache.jsp.index_jsp.callPL(index_jsp.java:49)
at org.apache.jsp.index_jsp._jspService(index_jsp.java:108)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:411)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:473)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:377)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1636)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:259)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:757)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:577)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:158)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:371)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:238)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:520)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:217)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:182)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:156)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:218)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:95)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:260)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:177)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:109)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:88)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:53)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:524)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:94)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:33)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:114)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.InterruptedIOException: Socket read interrupted
at oracle.net.nt.TimeoutSocketChannel.handleInterrupt(TimeoutSocketChannel.java:262)
at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:184)
at oracle.net.ns.NSProtocolNIO.doSocketRead(NSProtocolNIO.java:544)
at oracle.net.ns.NIOPacket.readHeader(NIOPacket.java:234)
at oracle.net.ns.NIOPacket.readPacketFromSocketChannel(NIOPacket.java:174)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:122)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:100)
at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:86)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForUnmarshall(T4CMAREngineNIO.java:762)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:427)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:394)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:255)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:610)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:249)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:82)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:924)
It seems the transport layer has been migrated to java.nio, and the setQueryTimeout method no longer has any effect.
Things we've tried:
1. Setting the default statement timeout to -1 in the JDBC Connection Pool Advanced Attributes screen in the Payara console.
2. Setting the timeout directly on the connection with conn.setNetworkTimeout(Executors.newFixedThreadPool(1), 0); this had no effect.
3. In different sources we found that the properties below should affect the network timeout evaluation. We set them as JVM properties for Payara startup (-Doracle.net.CONNECT_TIMEOUT=xxx) and as JDBC Connection Pool properties, in both cases with values 0 and -1. It didn't work in any case.
oracle.net.CONNECT_TIMEOUT
oracle.net.READ_TIMEOUT
oracle.jdbc.ReadTimeout
Sources:
Oracle 18c Net Services best practices
Oracle 18c Java JDBC reference, E.1.5 Using JDBC with Firewalls
4. As we access the DataSource through the Payara datasource pool, we cannot cast com.sun.gjc.spi.jdbc40.DataSource40 (the class provided by Payara) to an OracleDataSource, but we created the DataSource programmatically to set the connection properties as shown here, setting the properties listed above, and it doesn't work either:
public static Properties oracleProperties() {
    // Already tried -1 and 0
    Properties properties = new Properties();
    properties.put("Oracle.net.CONNECT_TIMEOUT", 0);
    properties.put("Oracle.net.READ_TIMEOUT", 0);
    properties.put("Oracle.jdbc.ReadTimeout", 0);
    return properties;
}

public static OracleDataSource createDataSource() throws Exception {
    OracleDataSource ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:@itauc4602x:1521/BDExp");
    ods.setUser("enevac");
    ods.setPassword("enevac");
    ods.setDataSourceName("OracleXADataSource");
    ods.setLoginTimeout(0);
    // default connection properties to avoid the timeout exception
    ods.setConnectionProperties(oracleProperties());
    return ods;
}
Has anyone faced this problem? Any idea how to avoid the timeout restriction?
And why 15 minutes? According to the reference, the default value for oracle.net.ReadTimeout is 10 minutes.
Update:
To explain in more detail why I think the problem is in the driver, and why I discard other possible origins of the exception, I assume the timeout can be raised by three sources:
Network timeout: I discard it because I'm testing a Payara server on my local machine against the development database, with no firewall in between.
Database server: the DBA has checked the Oracle Net Services configuration and there is no limit set that explains the 15-minute cut. Besides, in that case an SQLException with some kind of ORA-xxx error code would be expected.
JDBC: this can be set at connection level, statement level and transaction level. As I said at the beginning, the code works with the Oracle 12c driver against both Oracle 12 and Oracle 18 servers; it was the change of the driver's jar that made the code stop working.
Finally the problem was fixed by configuring the "connectionProperties" custom property of the OracleDataSource in the Payara pool. As @ibre5041 pointed out, setting the property oracle.jdbc.javaNetNio=false changes the transport layer used by the driver, and it starts working as the previous Oracle 12c version did.
According to the Oracle reference, OracleDataSource implementations can receive the connection properties as a java.util.Properties object.
Table 8-2 Oracle Extended Data Source Properties
Name: connectionProperties
Type: java.util.Properties
Description: Specifies the connection properties.
To set a multivalued property on the JDBC pool in the Payara Admin Console, you have to set the properties as (prop1=value1,prop2=value2) (thank you again Ondro Mihályi). So in our case we set:
connectionProperties = (oracle.jdbc.ReadTimeout=0, oracle.jdbc.javaNetNio=false)
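If the datasource is built programmatically instead of through the Payara pool, the equivalent fix is a sketch like the one below. It mirrors the createDataSource method shown earlier; only the property values come from this answer (oracle.jdbc.ReadTimeout=0 and oracle.jdbc.javaNetNio=false), and the host, service name and credentials are placeholders.

import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;

public static OracleDataSource createPatchedDataSource() throws Exception {
    OracleDataSource ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:@dbhost:1521/SERVICE"); // placeholder URL
    ods.setUser("user");
    ods.setPassword("password");

    Properties props = new Properties();
    props.setProperty("oracle.jdbc.ReadTimeout", "0");    // no read timeout
    props.setProperty("oracle.jdbc.javaNetNio", "false");  // fall back to the pre-NIO transport
    ods.setConnectionProperties(props);
    return ods;
}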
As a summary of what works and what doesn't with the Oracle 18c JDBC driver (every step tested separately):
Setting the timeout at statement level doesn't work:
statement.setQueryTimeout(0)
Setting the timeout at connection level, with -1 or 0, doesn't work:
conn.setNetworkTimeout(Executors.newFixedThreadPool(1), timeoutInMs)
Setting the timeout properties programmatically in a java.util.Properties object on the OracleDataSource makes it work, as indicated in the question.
Setting the timeout properties as JVM properties works if the limit is below 15 minutes, but if you set a value above 15 minutes the exception is still thrown, so setting 0 or -1 has no effect.
So this makes the query stop after 10 seconds:
-Doracle.net.CONNECT_TIMEOUT=10000 -Doracle.net.READ_TIMEOUT=10000 -Doracle.jdbc.ReadTimeout=10000
But with this it stops after 15 minutes:
-Doracle.net.CONNECT_TIMEOUT=-1 -Doracle.net.READ_TIMEOUT=-1 -Doracle.jdbc.ReadTimeout=-1
Setting oracle.net.keepAlive=true as a JVM property, as @Nirmala suggested, doesn't work.
Setting oracle.jdbc.javaNetNio=false as a JVM property, as @ibre5041 suggested, makes it work. So it points to some problem with the java.nio transport layer.
Anyway, we have opened a support case with Oracle, because the JDBC API call statement.setQueryTimeout(0) should work without having to configure the datasource; I'll post the response when the case is closed.
The query execution could be stopped because of the default TCP connection timeout. Can you set the keep-alive property "oracle.net.keepAlive" to "true" and verify?

Local and Remote connection pools, Java DBCP

I have the following use-case:
I have a system that needs to use two different connection pools: one for a 'local' database (meaning a database running on the local machine) and one for a 'remote' database (meaning a database running on a different, remote server).
The remote database is a configuration-sharing database, while the local one holds different kinds of data.
I've created two classes in order to connect to those databases:
public class ConnectionPool {

    private static BasicDataSource ds = createNewDatasource();

    private static BasicDataSource createNewDatasource() {
        BasicDataSource ds = new BasicDataSource();
        String url = "jdbc:mysql://127.0.0.1:3306/SOME_DB";
        ds.setUrl(url);
        ds.setUsername(...);
        ds.setPassword(...);
        return ds;
    }

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }
}
The other class looks exactly the same, only it's called RemoteConnection and the URL is changed to:
String url = "jdbc:mysql://<REMOTE_IP>:3306/SOME_DB_2"
Running the above classes, I keep receiving the following message in my logs:
ERROR (RemoteConnection.java:40) - Failed on getConnection
java.sql.SQLException: Cannot create PoolableConnectionFactory (Access denied for user '...'@'<MACHINE_LOCAL_IP>' (using password: YES))
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2291)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2038)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1533)
at Censored.RemoteConnection.getConnection(RemoteConnection.java:59)
...
Caused by: java.sql.SQLException: Access denied for user '...'@'<MACHINE_LOCAL_IP>' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:873)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1710)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1226)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2188)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:776)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.GeneratedConstructorAccessor31.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:57)
at java.lang.reflect.Constructor.newInstance(Constructor.java:437)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:386)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330)
at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:39)
at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:256)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2301)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2287)
...
Firstly, the error above is weird: I've never used the machine's local IP, so I don't understand where it came from. In addition, it doesn't seem like a privilege problem, since I've tried logging into the remote database through the CLI, using:
mysql -u'...' -p'...' -h <REMOTE_IP> SOME_DB_2
And it connected successfully. It smells to me like a JDBC driver or connection-definition problem, but I can't find the problematic spot.
Any idea what I'm doing wrong?
I've found the problem. It was an SSL issue: the user I was using had to connect with an SSL certificate.
That's also the reason why I saw the local machine IP in the error.
A way to check is:
Select * from mysql.user where user='...';
What I've found was that:
ssl_type != ''
So I had to define:
jdbc:mysql://<REMOTE_IP>:3306/SOME_DB_2?useSSL=true
Configure the relevant certificates, and all was well.
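For completeness, a minimal sketch of what the SSL-enabled remote pool could look like with DBCP. The truststore path, password, and URL details are placeholders, and the property names assume MySQL Connector/J 5.x-style SSL settings plus standard JSSE system properties.

import org.apache.commons.dbcp2.BasicDataSource;

public class RemoteConnectionSsl {

    private static BasicDataSource createNewDatasource() {
        BasicDataSource ds = new BasicDataSource();
        // useSSL=true so the handshake matches the account's ssl_type requirement
        ds.setUrl("jdbc:mysql://REMOTE_IP:3306/SOME_DB_2"
                + "?useSSL=true&verifyServerCertificate=true");
        ds.setUsername("remote_user");
        ds.setPassword("remote_password");
        // Point the JVM at a truststore containing the server's CA certificate.
        // These are standard JSSE system properties, set here only for illustration.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        return ds;
    }
}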

trying to find db connection leak in my code, using Spring / JPA / Hikari

I've got a problem with a Spring web application that periodically runs into an error fetching a connection from my connection pool. Eventually in the logs I see entries like:
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
The only way I've found to recover once it hits this point is to restart Tomcat.
I think the most likely explanation is that I have some code somewhere that is not properly cleaning up its connection - returning it to Hikari, leaving something open so Spring can't clean it up, etc.
To troubleshoot, I've set my Hikari leakDetectionThreshold to 5000 ms and enabled logging. After that, I see log entries like:
2018-04-24 19:53:56 WARN ProxyLeakTask:87 - Connection leak detection triggered for org.postgresql.jdbc.PgConnection@664ec666, stack trace follows
java.lang.Exception: Apparent connection leak detected
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:35)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:99)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:129)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:47)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:146)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:172)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:148)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1940)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1909)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1887)
at org.hibernate.loader.Loader.doQuery(Loader.java:932)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:349)
at org.hibernate.loader.Loader.doList(Loader.java:2615)
at org.hibernate.loader.Loader.doList(Loader.java:2598)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2430)
at org.hibernate.loader.Loader.list(Loader.java:2425)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:335)
at org.hibernate.internal.SessionImpl.listCustomQuery(SessionImpl.java:2129)
at org.hibernate.internal.AbstractSharedSessionContract.list(AbstractSharedSessionContract.java:981)
at org.hibernate.query.internal.NativeQueryImpl.doList(NativeQueryImpl.java:147)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1398)
at org.hibernate.query.internal.AbstractProducedQuery.getSingleResult(AbstractProducedQuery.java:1444)
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.orm.jpa.SharedEntityManagerCreator$DeferredQueryInvocationHandler.invoke(SharedEntityManagerCreator.java:379)
at com.sun.proxy.$Proxy163.getSingleResult(Unknown Source)
at com.mycompany.web.jpa.util.DBHelper.getPagedMappedDbResults(DBHelper.java:76)
at com.mycompany.web.jpa.repository.TaskRepositoryImpl.findTaskDetailsByStepIdAndIdIn(TaskRepositoryImpl.java:245)
......
So it is detecting a possible leak. It could be a false positive, I suppose? But this is also the only class in my app that does database access outside the standard service/repository pattern often used in Spring apps, so it seems like a likely culprit, and it's my best lead at the moment.
Anyway, the last piece of non-library code I see in the trace (i.e. stuff I wrote, so most likely to be the cause of the leak!) is my DBHelper::getPagedMappedDbResults method, the relevant bit of which is included here:
Query q = entityManager.createNativeQuery(countQueryText);
setQueryParameters(q, parameters);
long numActualResults = 0;
try {
    numActualResults = ((Number) q.getSingleResult()).longValue(); // line 76
} catch (Exception e) {
    System.out.println("just in case: " + e);
}
So basically I create a Query object from my EntityManager instance, set some parameters, and run it to get some results.
Is there something I need to do with a Query object when I'm done with it? q.cleanup()? I don't see anything like this in the docs, but am I failing to do proper housekeeping on this resource?
The entityManager itself is injected via an @Autowired annotation. My understanding is that if I didn't instantiate it with "new" and instead let the Spring framework autowire it, then Spring will do whatever cleanup is necessary. Is that right? Or do I need to do some cleanup after I use the entityManager?
Version details:
Tomcat 8 / Java 8
Spring 5.0.0.RELEASE
Spring Data Kay-RELEASE
Hibernate 5.2.3.Final
Hikari 2.4.5
Any advice or suggestions would be greatly appreciated, thanks!
What is the query? Is it heavy? Maybe you have a deadlock here? The connection management looks fine: you do not acquire the connection explicitly, so there is no need to release it. The query might be long-running, so Hibernate is not able to complete it and release the connection.
Also, you can check the number of open connections on the DB side and do some analysis there as well.
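Since the stack trace shows a PgConnection, one way to do that DB-side check is to query pg_stat_activity. A rough sketch (the JDBC URL and credentials are placeholders, and the column list assumes a reasonably recent PostgreSQL):

import java.sql.*;

public class OpenConnectionCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             Statement s = c.createStatement();
             // Lists every session, its state and the statement it is (or was) running
             ResultSet rs = s.executeQuery(
                     "SELECT pid, state, query_start, query FROM pg_stat_activity")) {
            while (rs.next()) {
                System.out.printf("%d %s %s%n",
                        rs.getInt("pid"), rs.getString("state"), rs.getString("query"));
            }
        }
    }
}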

Single JDBC OracleDataSource/HikariCP with primary/backup DB

I'm trying to set up a single connection pool that references our primary database until it becomes unhealthy, after which the pool fails over and fills up against our backup. Until now I've been taking advantage of an undocumented feature of our application server's JNDI datasources, which allows me to specify two JDBC connection URL strings like this:
jdbc:oracle:thin:@primary:1521:DB|jdbc:oracle:thin:@backup:1521:DB
I have the following code, no doubt partially cribbed from some Hikari/Spring documentation months ago.
#Bean(name = "dataSource")
public DataSource dataSource() throws SQLException {
String userName = "user";
String password = "pass";
String server = "primary";
String database = "DB";
OracleDataSource ods = new OracleDataSource();
ods.setServerName(server);
ods.setDatabaseName(database);
ods.setNetworkProtocol("tcp");
ods.setUser(userName);
ods.setPassword(password);
ods.setPortNumber(1521);
ods.setDriverType("thin");
HikariConfig hkConfig = new HikariConfig();
hkConfig.setDataSource(ods);
hkConfig.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
hkConfig.setPoolName("springHikariRECPool");
hkConfig.setMaximumPoolSize(15);
hkConfig.setMinimumIdle(3);
hkConfig.setMaxLifetime(1800000); // 30 minutes
return new HikariDataSource(hkConfig);
}
My Google-Fu has failed me. Does anyone have any ideas on how to achieve the failover functionality?
Edit - re. @M. Deinum: "Remove the construction of the OracleDataSource and just set the url on the HikariConfig."
HikariConfig hkConfig = new HikariConfig();
hkConfig.setUsername(userName);
hkConfig.setPassword(password);
hkConfig.setJdbcUrl("jdbc:oracle:thin:@primary:1521:DB|jdbc:oracle:thin:@backup:1521:DB");
hkConfig.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
hkConfig.setPoolName("springHikariRECPool");
hkConfig.setMaximumPoolSize(15);
hkConfig.setMinimumIdle(3);
hkConfig.setMaxLifetime(1800000);
Unfortunately, this yields a fairly long stack, the base of which is this:
Caused by: java.sql.SQLException: Invalid Oracle URL specified: OracleDataSource.makeURL
at oracle.jdbc.pool.OracleDataSource.makeURL(OracleDataSource.java:1277)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:185)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:356)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:199)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:444)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:515)
Investigating that here - Hikaricp Oracle connection issue - and here - Invalid Oracle URL specified: OracleDataSource.makeURL - led me to add some additional properties:
hkConfig.addDataSourceProperty("portNumber", "1521");
hkConfig.addDataSourceProperty("driverType", "thin");
Which now bombs with:
Caused by: java.net.UnknownHostException: null: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:117)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:370)
The JDBC URL is no longer being referenced, it would appear... and confirmed: I took the backup connection string out of the URL and reached the same exception with a standard, single-server connection. So it appears the ODS demands to be configured as originally done (or mimicked with Properties).
As a last gasp for this method, I tried setting the serverName property to "primary|backup" and, as expected, that blew up as well:
Caused by: java.net.UnknownHostException: primary|backup: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:117)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:411)
... 56 more
I have failed to note thus far that I am using ojdbc7.jar.
Use the standard way: support for Data Guard, failover, and RAC is a native feature of the Oracle JDBC drivers.
First, use tnsnames.ora as described in "How to connect JDBC to tns oracle".
Second, use multiple hosts in tnsnames.ora:
DB =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=off)
      (FAILOVER=ON)
      (ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=backup)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=DB)))
The Oracle JDBC driver will connect to the host where the database is OPEN and the service named "DB" is present.
PS: you can also pass the whole TNS connect descriptor directly to the JDBC driver as the URL:
url="jdbc:oracle:thin:#(DESCRIPTION=
(LOAD_BALANCE=on)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=secondary)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=DB)))"

Hibernate connecting to DB2 at application startup on WAS Liberty on CICS

We're running a simple webapp on WebSphere Liberty, that uses Hibernate as persistence provider (included as a library in the WAR file).
When the application starts up, Hibernate is initialized and opens a connection to DB2 to issue some SQL statements. However, this fails when running on CICS and using the JDBC Type 2 driver DataSource. The following messages are logged (some extra line breaks added for readability):
WARN org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator -
HHH000342: Could not obtain connection to query metadata : [jcc][50053][12310][4.19.56]
T2zOS exception: [jcc][T2zos]T2zosCicsApi.checkApiStatus:
Thread is not CICS-DB2 compatible: CICS_REGION_BUT_API_DISALLOWED ERRORCODE=-4228, SQLSTATE=null
...
ERROR org.hibernate.hql.spi.id.IdTableHelper - Unable obtain JDBC Connection
com.ibm.db2.jcc.am.SqlException: [jcc][50053][12310][4.19.56] T2zOS exception: [jcc][T2zos]T2zosCicsApi.checkApiStatus:
Thread is not CICS-DB2 compatible: CICS_REGION_BUT_API_DISALLOWED ERRORCODE=-4228, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source) ~[db2jcc4.jar:?]
...
at com.ibm.db2.jcc.t2zos.T2zosConnection.a(Unknown Source) ~[db2jcc4.jar:?]
...
at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(Unknown Source) ~[db2jcc4.jar:?]
at com.ibm.cics.wlp.jdbc.internal.CICSDataSource.getConnection(CICSDataSource.java:176) ~[?:?]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) ~[our-app.war:5.1.0.Final]
at org.hibernate.internal.SessionFactoryImpl$3.obtainConnection(SessionFactoryImpl.java:643) ~[our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.IdTableHelper.executeIdTableCreationStatements(IdTableHelper.java:67) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.global.GlobalTemporaryTableBulkIdStrategy.finishPreparation(GlobalTemporaryTableBulkIdStrategy.java:125) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.global.GlobalTemporaryTableBulkIdStrategy.finishPreparation(GlobalTemporaryTableBulkIdStrategy.java:42) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.AbstractMultiTableBulkIdStrategyImpl.prepare(AbstractMultiTableBulkIdStrategyImpl.java:88) [our-app.war:5.1.0.Final]
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:451) [our-app.war:5.1.0.Final]
My current understanding is that when running on CICS and using the JDBC Type 2 driver, only some threads are capable of opening a DB2 connection: the application threads (the ones processing HTTP requests) as well as worker threads servicing the CICSExecutorService.
The current solution is to:
1. Disable JDBC metadata lookup in JdbcEnvironmentInitiator by setting the hibernate.temp.use_jdbc_metadata_defaults property to false (see the sketch below).
2. Wrap the execution of IdTableHelper#executeIdTableCreationStatements in a Runnable and submit it to CICSExecutorService.
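For the first step, here is a minimal sketch of how that property could be supplied when the EntityManagerFactory is built through Spring. The LocalContainerEntityManagerFactoryBean wiring and the package name are assumptions for illustration; any mechanism that feeds JPA properties to Hibernate works the same way.

import java.util.Properties;

import javax.sql.DataSource;

import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;

public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
    emf.setDataSource(dataSource);
    emf.setPackagesToScan("com.example.domain"); // hypothetical package

    Properties jpaProperties = new Properties();
    // Stops Hibernate from opening a connection at startup just to read JDBC metadata
    jpaProperties.setProperty("hibernate.temp.use_jdbc_metadata_defaults", "false");
    emf.setJpaProperties(jpaProperties);

    // Persistence provider / vendor adapter configuration omitted for brevity.
    return emf;
}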
Would you consider this solution to be sufficient and suitable for production? Or maybe you use some different approach?
Versions used:
CICS Transaction Server for z/OS 5.3.0
WebSphere Application Server 8.5.5.8
Hibernate 5.1.0
Update: just to clarify, once our application has started, it can query DB2 with no problems (when servicing HTTP requests). The problem is only related to startup.
CICS TS v5.3 support for the JPA feature in Liberty was recently made available in a service refresh (July 2016). Prior to that update, attempting to run JPA in applications would result in very similar problems to those you describe.
Although you are running Hibernate and you are on a CICS-enabled thread, it does not have the API environment (which would allow the type 2 JDBC call to succeed). New detection logic was developed specifically (but not exclusively) for use with the DB2 JDBC type 2 driver and JPA. This update was shipped in a recent service refresh and might cure the issues you are seeing.
Try applying:
http://www-01.ibm.com/support/docview.wss?crawler=1&uid=swg1PI58375
The description says it is for 'Standard-mode Liberty' support, but it contains other developments as outlined above.
The following solution was tested and works. The idea is to execute the SQL/DDL statements using CICSExecutorService#runAsCICS. The following extension is registered via the hibernate.hql.bulk_id_strategy property.
package org.hibernate.hql.spi.id.global;

import java.util.concurrent.*;

import org.hibernate.boot.spi.MetadataImplementor;
import org.hibernate.engine.jdbc.connections.spi.JdbcConnectionAccess;
import org.hibernate.engine.jdbc.spi.JdbcServices;
import org.springframework.util.ClassUtils;

import com.ibm.cics.server.*;

public class CicsAwareGlobalTemporaryTableBulkIdStrategy extends GlobalTemporaryTableBulkIdStrategy {

    @Override
    protected void finishPreparation(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess, MetadataImplementor metadata, PreparationContextImpl context) {
        execute(() -> super.finishPreparation(jdbcServices, connectionAccess, metadata, context));
    }

    @Override
    public void release(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess) {
        execute(() -> super.release(jdbcServices, connectionAccess));
    }

    private void execute(Runnable runnable) {
        if (isCics() && IsCICS.getApiStatus() == IsCICS.CICS_REGION_BUT_API_DISALLOWED) {
            RunnableFuture<Void> task = new FutureTask<>(runnable, null);
            CICSExecutorService.runAsCICS(task);
            try {
                task.get();
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException("Failed to execute in a CICS API-enabled thread. " + e.getMessage(), e);
            }
        } else {
            runnable.run();
        }
    }

    private boolean isCics() {
        return ClassUtils.isPresent("com.ibm.cics.server.CICSExecutorService", null);
    }
}
Note that the newer JCICS API version has an overload of the runAsCICS method that accepts a Callable, which might be useful to simplify the CICS branch of the execute method to something like this:
CICSExecutorService.runAsCICS(() -> { runnable.run(); return null; }).get();
A few alternatives were tried:
Wrapping just the connection acquisition (org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl#getConnection) did not work, as the connection was already closed by the time it was used in the main thread.
Wrapping the whole application startup (org.springframework.web.context.ContextLoaderListener#contextInitialized) led to classloading issues.
Edit: eventually we went with a custom implementation of Hibernate's MultiTableBulkIdStrategy that does not run any SQL/DDL on startup (see the project page on GitHub).
