Hibernate connecting to DB2 at application startup on WAS Liberty on CICS - java

We're running a simple webapp on WebSphere Liberty that uses Hibernate as its persistence provider (included as a library in the WAR file).
When the application starts up, Hibernate is initialized and opens a connection to DB2 to issue some SQL statements. However, this fails when running on CICS and using a JDBC Type 2 driver DataSource. The following messages are logged (some extra line breaks added for readability):
WARN org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator -
HHH000342: Could not obtain connection to query metadata : [jcc][50053][12310][4.19.56]
T2zOS exception: [jcc][T2zos]T2zosCicsApi.checkApiStatus:
Thread is not CICS-DB2 compatible: CICS_REGION_BUT_API_DISALLOWED ERRORCODE=-4228, SQLSTATE=null
...
ERROR org.hibernate.hql.spi.id.IdTableHelper - Unable obtain JDBC Connection
com.ibm.db2.jcc.am.SqlException: [jcc][50053][12310][4.19.56] T2zOS exception: [jcc][T2zos]T2zosCicsApi.checkApiStatus:
Thread is not CICS-DB2 compatible: CICS_REGION_BUT_API_DISALLOWED ERRORCODE=-4228, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source) ~[db2jcc4.jar:?]
...
at com.ibm.db2.jcc.t2zos.T2zosConnection.a(Unknown Source) ~[db2jcc4.jar:?]
...
at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(Unknown Source) ~[db2jcc4.jar:?]
at com.ibm.cics.wlp.jdbc.internal.CICSDataSource.getConnection(CICSDataSource.java:176) ~[?:?]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) ~[our-app.war:5.1.0.Final]
at org.hibernate.internal.SessionFactoryImpl$3.obtainConnection(SessionFactoryImpl.java:643) ~[our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.IdTableHelper.executeIdTableCreationStatements(IdTableHelper.java:67) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.global.GlobalTemporaryTableBulkIdStrategy.finishPreparation(GlobalTemporaryTableBulkIdStrategy.java:125) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.global.GlobalTemporaryTableBulkIdStrategy.finishPreparation(GlobalTemporaryTableBulkIdStrategy.java:42) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.AbstractMultiTableBulkIdStrategyImpl.prepare(AbstractMultiTableBulkIdStrategyImpl.java:88) [our-app.war:5.1.0.Final]
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:451) [our-app.war:5.1.0.Final]
My current understanding is that when running on CICS with a JDBC Type 2 driver, only some threads are capable of opening a DB2 connection: the application threads (the ones processing HTTP requests) as well as the worker threads servicing the CICSExecutorService.
The current solution is to:
Disable JDBC metadata lookup in JdbcEnvironmentInitiator by setting the hibernate.temp.use_jdbc_metadata_defaults property to false (a configuration sketch follows).
Wrap the execution of IdTableHelper#executeIdTableCreationStatements in a Runnable and submit it to the CICSExecutorService.
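For reference, a minimal sketch of the first step, assuming a plain JPA bootstrap (with Spring, the same key would go into the JPA properties of the entity manager factory bean); the persistence unit name and the explicit dialect are assumptions here:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class StartupWithoutMetadataLookup {

    public static EntityManagerFactory create() {
        Map<String, Object> props = new HashMap<>();
        // Skip the JDBC metadata lookup that JdbcEnvironmentInitiator performs at startup,
        // so Hibernate does not try to open a DB2 connection on a non-API-enabled thread.
        props.put("hibernate.temp.use_jdbc_metadata_defaults", "false");
        // With metadata detection disabled, the dialect has to be set explicitly
        // (DB2390Dialect is assumed here for DB2 on z/OS).
        props.put("hibernate.dialect", "org.hibernate.dialect.DB2390Dialect");
        // "app-unit" is a hypothetical persistence unit name defined in persistence.xml.
        return Persistence.createEntityManagerFactory("app-unit", props);
    }
}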
Would you consider this solution to be sufficient and suitable for production? Or maybe you use some different approach?
Versions used:
CICS Transaction Server for z/OS 5.3.0
WebSphere Application Server 8.5.5.8
Hibernate 5.1.0
Update: Just to clarify, once our application is started, it can query DB2 with no problems (when servicing HTTP requests). The problem is only related to startup.

CICS TS v5.3 support for the JPA feature in Liberty was recently made available in a service refresh (July 2016). Prior to that update, attempting to run JPA in applications would result in very similar problems to those you describe.
Although you are running Hibernate and you are on a CICS-enabled thread, that thread does not have the API environment which would allow the Type 2 JDBC call to succeed. New detection logic was developed specifically (but not exclusively) for use with the DB2 JDBC Type 2 driver and JPA. This update was shipped in a recent service refresh and might cure the issues you are seeing.
Try applying:
http://www-01.ibm.com/support/docview.wss?crawler=1&uid=swg1PI58375
The description says it is for 'Standard-mode Liberty' support, but it contains other developments as outlined above.

The following solution was tested and works.
The idea is to execute the SQL/DDL statements using CICSExecutorService#runAsCICS. The following extension is registered via the hibernate.hql.bulk_id_strategy property (a registration sketch is shown after the code).
package org.hibernate.hql.spi.id.global;

import java.util.concurrent.*;

import org.hibernate.boot.spi.MetadataImplementor;
import org.hibernate.engine.jdbc.connections.spi.JdbcConnectionAccess;
import org.hibernate.engine.jdbc.spi.JdbcServices;
import org.springframework.util.ClassUtils;

import com.ibm.cics.server.*;

public class CicsAwareGlobalTemporaryTableBulkIdStrategy extends GlobalTemporaryTableBulkIdStrategy {

    @Override
    protected void finishPreparation(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess, MetadataImplementor metadata, PreparationContextImpl context) {
        execute(() -> super.finishPreparation(jdbcServices, connectionAccess, metadata, context));
    }

    @Override
    public void release(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess) {
        execute(() -> super.release(jdbcServices, connectionAccess));
    }

    private void execute(Runnable runnable) {
        if (isCics() && IsCICS.getApiStatus() == IsCICS.CICS_REGION_BUT_API_DISALLOWED) {
            RunnableFuture<Void> task = new FutureTask<>(runnable, null);
            CICSExecutorService.runAsCICS(task);
            try {
                task.get();
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException("Failed to execute in a CICS API-enabled thread. " + e.getMessage(), e);
            }
        } else {
            runnable.run();
        }
    }

    private boolean isCics() {
        return ClassUtils.isPresent("com.ibm.cics.server.CICSExecutorService", null);
    }
}
Note that the newer JCICS API version has an overload of the runAsCICS method accepting a Callable, which might be useful to simplify the CICS branch of the execute method to something like this:
CICSExecutorService.runAsCICS(() -> { runnable.run(); return null; }).get();
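As referenced above, a minimal sketch of how the extension might be registered through hibernate.hql.bulk_id_strategy, assuming a Spring-based JPA setup (bean names and the entity package are hypothetical):

import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
public class JpaConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        emf.setPackagesToScan("com.example.domain"); // hypothetical entity package
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());

        Properties jpaProps = new Properties();
        // Point Hibernate at the CICS-aware strategy instead of the default
        // GlobalTemporaryTableBulkIdStrategy.
        jpaProps.setProperty("hibernate.hql.bulk_id_strategy",
                "org.hibernate.hql.spi.id.global.CicsAwareGlobalTemporaryTableBulkIdStrategy");
        emf.setJpaProperties(jpaProps);
        return emf;
    }
}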
A few alternatives tried:
Wrapping just the connection acquisition action (org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl#getConnection) did not work, as the connection had already been closed by the time it was used on the main thread.
Wrapping the whole application startup (org.springframework.web.context.ContextLoaderListener#contextInitialized) led to classloading issues.
Edit: Eventually went with a custom Hibernate MultiTableBulkIdStrategy implementation that does not run any SQL/DDL at startup (see the project page on GitHub).
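The idea behind that final approach might look roughly like the sketch below (my own assumption, not the actual project code): reuse GlobalTemporaryTableBulkIdStrategy but skip the DDL entirely, which presumes the global temporary tables already exist in DB2.

package org.hibernate.hql.spi.id.global;

import org.hibernate.boot.spi.MetadataImplementor;
import org.hibernate.engine.jdbc.connections.spi.JdbcConnectionAccess;
import org.hibernate.engine.jdbc.spi.JdbcServices;

// Sketch only: skips the CREATE/DROP statements entirely, assuming the
// global temporary tables have been created up front by a DBA.
public class NoDdlGlobalTemporaryTableBulkIdStrategy extends GlobalTemporaryTableBulkIdStrategy {

    @Override
    protected void finishPreparation(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess,
                                     MetadataImplementor metadata, PreparationContextImpl context) {
        // Intentionally empty: no CREATE GLOBAL TEMPORARY TABLE statements at startup.
    }

    @Override
    public void release(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess) {
        // Intentionally empty: no DROP statements at shutdown.
    }
}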

Related

Statement.setQueryTimeout doesn't work on Oracle 18c jdbc driver

We have a J2EE application on a Payara 5.2020 server that executes a long-running query (PL/SQL that runs for a couple of hours).
To avoid a timeout exception, we use this sentence at StatementLevel:
statement.setQueryTimeout(0);
This works using the Oracle JDBC driver version 12c, but after migrating to Oracle 18c and changing the driver to the 18c version, the query execution stops after 15 minutes with the exception below. The code works against both Oracle 12 and Oracle 18 servers; it is the change of the driver's jar that brings up the problem.
The problem has been reproduced in Linux and Windows machines:
2021-06-14T07:50:01.762+0200|SEVERE: java.sql.SQLRecoverableException: Error de E/S: Socket read interrupted
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:946)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1136)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3640)
at oracle.jdbc.driver.T4CCallableStatement.executeInternal(T4CCallableStatement.java:1318)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3752)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4242)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1079)
at com.sun.gjc.spi.base.PreparedStatementWrapper.execute(PreparedStatementWrapper.java:532)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.sun.gjc.spi.jdbc40.ProfiledConnectionWrapper40$1.invoke(ProfiledConnectionWrapper40.java:437)
at com.sun.proxy.$Proxy324.execute(Unknown Source)
at org.apache.jsp.index_jsp.callPL(index_jsp.java:49)
at org.apache.jsp.index_jsp._jspService(index_jsp.java:108)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:411)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:473)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:377)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1636)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:259)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:757)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:577)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:158)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:371)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:238)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:520)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:217)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:182)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:156)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:218)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:95)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:260)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:177)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:109)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:88)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:53)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:524)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:94)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:33)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:114)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.InterruptedIOException: Socket read interrupted
at oracle.net.nt.TimeoutSocketChannel.handleInterrupt(TimeoutSocketChannel.java:262)
at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:184)
at oracle.net.ns.NSProtocolNIO.doSocketRead(NSProtocolNIO.java:544)
at oracle.net.ns.NIOPacket.readHeader(NIOPacket.java:234)
at oracle.net.ns.NIOPacket.readPacketFromSocketChannel(NIOPacket.java:174)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:122)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:100)
at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:86)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForUnmarshall(T4CMAREngineNIO.java:762)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:427)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:394)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:255)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:610)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:249)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:82)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:924)
It seems the transport layer has been migrated to java.nio, and the setQueryTimeout method no longer behaves as it did before.
Things we've tried:
Setting the default statement timeout to -1 in the JDBC Connection Pool Advanced Attributes screen in the Payara console.
Setting the timeout directly on the connection with conn.setNetworkTimeout(Executors.newFixedThreadPool(1), 0), which didn't have any effect.
In different sources we have found that the properties below should affect the network timeout evaluation. We set them as JVM properties for Payara startup (-Doracle.net.CONNECT_TIMEOUT=xxx) and as JDBC Connection Pool properties, in both cases with values 0 and -1. It didn't work in any case.
oracle.net.CONNECT_TIMEOUT
oracle.net.READ_TIMEOUT
oracle.jdbc.ReadTimeout
Sources:
Oracle 18c Net services best practices
Oracle 18c java jdbc reference. E.1.5 Using JDBC with Firewalls
As we are accessing the DataSource through the Payara DataSource pool, we cannot cast the com.sun.gjc.spi.jdbc40.DataSource40 (class provided by Payara) to an OracleDataSource, but we created the DataSource programmatically to set the connection properties as shown here, using the properties listed above, and it doesn't work either:
public static Properties oracleProperties() {
    // Already tried -1 and 0
    Properties properties = new Properties();
    properties.put("Oracle.net.CONNECT_TIMEOUT", 0);
    properties.put("Oracle.net.READ_TIMEOUT", 0);
    properties.put("Oracle.jdbc.ReadTimeout", 0);
    return properties;
}

public static OracleDataSource createDataSource() throws Exception {
    OracleDataSource ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:@itauc4602x:1521/BDExp");
    ods.setUser("enevac");
    ods.setPassword("enevac");
    ods.setDataSourceName("OracleXADataSource");
    ods.setLoginTimeout(0);
    // default connection properties to avoid timeoutException
    ods.setConnectionProperties(oracleProperties());
    return ods;
}
Has anyone faced this problem? Any idea on how to avoid the timeout restriction?
And why 15 minutes? According to the reference, the default value for oracle.net.ReadTimeout is 10 minutes.
Update:
To explain in more detail why I think the problem is in the driver and why I rule out other possible origins of the exception: I assume the timeout can be raised from three sources:
Network timeout: I rule this out because I'm testing a Payara server on my local machine against the development database, with no firewall in between.
Database server: the DBA has checked the Oracle Net Services configuration and there is no limit set that explains the 15-minute cut. Besides, in that case an SQLException with some kind of ORA-xxx error code would be expected.
JDBC: the timeout can be set at connection level, statement level and transaction level. As I said at the beginning, the code works with the Oracle 12c driver against both Oracle 12 and Oracle 18 servers; it was the change of the driver's jar that made the code stop working.
Finally the problem was fixed by configuring the "connectionProperties" custom property of the OracleDataSource in the Payara pool. As @ibre5041 pointed out, setting the property oracle.jdbc.javaNetNio=false changes the transport layer used by the driver, and it starts working like the previous Oracle 12c version did.
According to the Oracle reference, OracleDataSource implementations can receive the connection properties as a java.util.Properties object.
Table 8-2 Oracle Extended Data Source Properties
Name: connectionProperties
Type: java.util.Properties
Description: Specifies the connection properties.
To set a multivalued property on the JDBC pool in the Payara Admin Console, you have to set the properties as (prop1=value1,prop2=value2) (thank you again, Ondro Mihályi). So in our case we set:
connectionProperties = (oracle.jdbc.ReadTimeout=0, oracle.jdbc.javaNetNio=false)
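For the programmatic equivalent, a minimal sketch (a variation of the createDataSource method shown earlier; URL and credentials are placeholders) might look like this:

import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;

public class OracleNioWorkaround {

    public static OracleDataSource createDataSource() throws Exception {
        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder URL
        ods.setUser("user");          // placeholder
        ods.setPassword("password");  // placeholder

        Properties props = new Properties();
        // Disable the read timeout entirely (0 = no limit).
        props.setProperty("oracle.jdbc.ReadTimeout", "0");
        // Fall back to the pre-NIO transport layer, which is what made the
        // 18c driver behave like the 12c driver in our case.
        props.setProperty("oracle.jdbc.javaNetNio", "false");
        ods.setConnectionProperties(props);
        return ods;
    }
}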
As a summary of what works and what doesn't with the Oracle 18c JDBC driver (every step tested separately):
Setting timeout at statement level doesn't work:
statement.setQueryTimeout(0)
Setting timeout at connection level, with -1 or 0, doesn't work:
conn.setNetworkTimeout(Executors.newFixedThreadPool(1), timeout in ms)
Setting the timeout properties programmatically in a java.util.Properties object passed to the OracleDataSource makes it work, as indicated in the question.
Setting the timeout properties as JVM properties works if the limit is below 15 minutes, but if you set a value greater than 15 minutes the exception is still thrown, so setting 0 or -1 has no effect.
So this makes the query stop after 10 seconds:
-Doracle.net.CONNECT_TIMEOUT=10000 -Doracle.net.READ_TIMEOUT=10000 -Doracle.jdbc.ReadTimeout=10000
But with this it stops after 15 minutes:
-Doracle.net.CONNECT_TIMEOUT=-1 -Doracle.net.READ_TIMEOUT=-1 -Doracle.jdbc.ReadTimeout=-1
Setting oracle.net.keepAlive=true as a JVM property, as @Nirmala suggested, doesn't work.
Setting oracle.jdbc.javaNetNio=false as a JVM property, as @ibre5041 suggested, makes it work, so it points to some problem with the java.nio transport layer.
Anyway, we opened a support case with Oracle, because the JDBC API call statement.setQueryTimeout(0) should work without having to configure the data source; I'll post the response when the case is closed.
The query execution could be stopped because of the default TCP connection timeout. Can you set the keepalive property "oracle.net.keepAlive" to "true" and verify?

Is there a configuration for removing LiquiBase DATABASE CHANGELOGLOCK automatically after a certain time or on app restart?

We have a Spring Boot 2 driven HA Java application that uses PostgreSQL underneath.
For certain reasons, like unexpected crashes or exceptions, Liquibase ends up with a stale DATABASECHANGELOGLOCK that is never released.
This results in subsequent deployments of the app failing: the app waits for the changelog lock and then exits, as follows:
2020-03-04T11:10:31.78+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:31.78+0200 Waiting for changelog lock....
2020-03-04T11:10:32.87+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:32.87+0200 Waiting for changelog lock....
2020-03-04T11:10:41.78+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:41.78+0200 Waiting for changelog lock....
2020-03-04T11:10:42.87+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:42.87+0200 Waiting for changelog lock....
2020-03-04T11:10:51.79+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:51.79+0200 Waiting for changelog lock....
2020-03-04T11:10:52.88+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:52.88+0200 Waiting for changelog lock....
2020-03-04T11:10:54.00+0200 ERR 2020-03-04 09:10:54.010 UTC
2020-03-04T11:10:55.88+0200 [HEALTH/0] ERR Failed to make TCP connection to port 8080: connection refused
2020-03-04T11:10:55.88+0200 [CELL/0] ERR Failed after 1m0.626s: readiness health check never passed.
2020-03-04T11:10:55.89+0200 [CELL/SSHD/0] OUT Exit status 0
2020-03-04T11:10:55.89+0200 info [native] Initiating shutdown sequence for Java agent
2020-03-04T11:10:55.89+0200 info [] Connection Status (120 times 300s) : 0909
Is there a configuration for removing the Liquibase DATABASECHANGELOGLOCK automatically after a certain time, or for removing it on application start if it is older than, say, 5 minutes or some predefined time period?
Or can this be done programmatically at app start, before Liquibase starts looking for the changelog lock?
So I was able to achieve this via the following approach:
We initialise Liquibase using a SpringLiquibase bean.
Within this bean factory method, before the SpringLiquibase instance is constructed, I call a method that uses a plain JDBC Statement to query the database for a lock; if there are any locks older than 5 minutes, we delete them.
@Bean
public SpringLiquibase liquibase(DataSource dataSource) {
    // Added a hook to check for stale locks before Liquibase initialises.
    removeDBLock(dataSource);

    SpringLiquibase liquibase = new SpringLiquibase();
    liquibase.setChangeLog(Constants.DDL_XML);
    liquibase.setDataSource(dataSource);
    return liquibase;
}

private void removeDBLock(DataSource dataSource) {
    // Timestamp threshold, currently set to 5 minutes or older.
    final Timestamp lastDBLockTime = new Timestamp(System.currentTimeMillis() - (5 * 60 * 1000));
    final String query = format("DELETE FROM DATABASECHANGELOGLOCK WHERE LOCKED=true AND LOCKGRANTED<'%s'", lastDBLockTime.toString());
    // Use try-with-resources for both the connection and the statement so neither is leaked.
    try (Connection conn = dataSource.getConnection();
         Statement stmt = conn.createStatement()) {
        int updateCount = stmt.executeUpdate(query);
        if (updateCount > 0) {
            log.error("Locks Removed Count: {} .", updateCount);
        }
    } catch (SQLException e) {
        log.error("Error! Remove Change Lock threw an Exception. ", e);
    }
}
The default lock implementation provided by Liquibase uses a database table called DATABASECHANGELOGLOCK. Once a process that has acquired the lock is unexpectedly terminated, the only way to recover is to manually release that lock (using the Liquibase CLI or a SQL statement). Please take a look at this Liquibase extension, which replaces the StandardLockService by using database locks instead: https://github.com/blagerweij/liquibase-sessionlock
This extension uses MySQL or PostgreSQL user-lock statements, which are automatically released when the database connection is closed (e.g. when the container is stopped unexpectedly). The only thing required to use the extension is to add a dependency on the library. Liquibase will automatically detect the improved LockService.
I'm not the author of the library, but I stumbled upon it when I was searching for a solution. I helped the author by releasing the library to Maven Central. It currently supports MySQL and PostgreSQL, but it should be fairly easy to support other RDBMSs.

trying to find db connection leak in my code, using Spring / JPA / Hikari

I've got a problem with a Spring web application that periodically runs into an error fetching a connection from my connection pool. Eventually in the logs I see entries like:
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
The only way to recover I've found, once it hits this point, is to restart Tomcat.
I think the most likely explanation is that I have some code somewhere that is not properly cleaning up its connection (not returning it to Hikari, leaving something open so Spring can't clean it up, etc.).
To troubleshoot, I've set my Hikari leakDetectionThreshold to 5000 ms and enabled logging. After that, I see log entries like:
2018-04-24 19:53:56 WARN ProxyLeakTask:87 - Connection leak detection triggered for org.postgresql.jdbc.PgConnection@664ec666, stack trace follows
java.lang.Exception: Apparent connection leak detected
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:35)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:99)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:129)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:47)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:146)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:172)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:148)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1940)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1909)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1887)
at org.hibernate.loader.Loader.doQuery(Loader.java:932)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:349)
at org.hibernate.loader.Loader.doList(Loader.java:2615)
at org.hibernate.loader.Loader.doList(Loader.java:2598)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2430)
at org.hibernate.loader.Loader.list(Loader.java:2425)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:335)
at org.hibernate.internal.SessionImpl.listCustomQuery(SessionImpl.java:2129)
at org.hibernate.internal.AbstractSharedSessionContract.list(AbstractSharedSessionContract.java:981)
at org.hibernate.query.internal.NativeQueryImpl.doList(NativeQueryImpl.java:147)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1398)
at org.hibernate.query.internal.AbstractProducedQuery.getSingleResult(AbstractProducedQuery.java:1444)
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.orm.jpa.SharedEntityManagerCreator$DeferredQueryInvocationHandler.invoke(SharedEntityManagerCreator.java:379)
at com.sun.proxy.$Proxy163.getSingleResult(Unknown Source)
at com.mycompany.web.jpa.util.DBHelper.getPagedMappedDbResults(DBHelper.java:76)
at com.mycompany.web.jpa.repository.TaskRepositoryImpl.findTaskDetailsByStepIdAndIdIn(TaskRepositoryImpl.java:245)
......
So it is detecting a possible leak. It could be a false positive, I suppose? But this is also the only class in my app that does database access outside of the standard service/repository pattern often used in Spring apps, so it seems like a likely culprit, and it's my best lead at the moment.
Anyway, the last piece of non-library code I see in the trace (i.e. stuff I wrote, so most likely to be the cause of the leak!) is my DBHelper::getPagedMappedDbResults method; the relevant bit is included here:
Query q = entityManager.createNativeQuery(countQueryText);
setQueryParameters(q, parameters);
long numActualResults = 0;
try {
    numActualResults = ((Number) q.getSingleResult()).longValue(); // line 76
} catch (Exception e) {
    System.out.println("just in case: " + e);
}
So basically I create a Query object from my EntityManager instance, set some parameters, and run it to get some results.
Is there something I need to be doing with a Query object when I'm done with it? q.cleanup()? I don't see anything like this from reading the docs, but am I not doing good housekeeping on this resource?
The entityManager itself is created via an @Autowired annotation. My understanding is that if I didn't instantiate it with "new" and instead let the Spring framework autowire it, then Spring will do whatever cleanup is necessary. Is that right? Or do I need to be doing some cleanup after I use the entityManager?
Version details:
Tomcat 8 / Java 8
Spring 5.0.0.RELEASE
Spring Data Kay-RELEASE
Hibernate 5.2.3.Final
Hikari 2.4.5
Any advice or suggestions would be greatly appreciated, thanks!
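For reference, a minimal sketch of the leak-detection setup mentioned above, assuming the pool is configured programmatically (in Spring Boot the spring.datasource.hikari.leak-detection-threshold property achieves the same thing; URL and credentials are placeholders):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolConfig {

    public static DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setUsername("user");     // placeholder
        config.setPassword("password"); // placeholder
        // Log a warning with the borrowing stack trace if a connection is
        // held for more than 5 seconds without being returned to the pool.
        config.setLeakDetectionThreshold(5000);
        return new HikariDataSource(config);
    }
}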
What is the query? Is it heavy? Maybe you have a deadlock here? Connection management looks fine; you do not acquire the connection explicitly, so there is no need to release it. The query might be long-running, so Hibernate is not able to complete it and release the connection.
Also, you can check the number of open connections on the DB side and do some analysis there as well, for example as sketched below.
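A minimal sketch of such a check on the PostgreSQL side, using the standard pg_stat_activity view (connection details and database name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OpenConnectionCheck {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point this at the same database the pool uses.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                "SELECT state, count(*) FROM pg_stat_activity WHERE datname = ? GROUP BY state")) {
            ps.setString(1, "mydb");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Shows how many connections are active, idle, idle-in-transaction, etc.
                    System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                }
            }
        }
    }
}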

Acquired Connections grows abruptly leading to server crash

I have a Java application that schedules a cron job every minute. It runs on GlassFish 4. We are using Hibernate with a container-managed JTA entity manager to execute queries against a SQL Server database.
JDBC Connection Pool Settings:
Initial and Minimum Pool Size: 16
Maximum Pool Size: 64
Pool Resize Quantity: 4
Idle Timeout: 300
Max Wait Time: 60000
JDBC Connection Pool Statistics after a 22-hour run:
NumConnUsed: 0
NumConnAcquired: 14404
NumConnReleased: 14404
NumConnCreated: 16
NumConnFree: 16
The number of acquired connections keeps incrementing, and GlassFish 4 crashes after around 10 days with the exception below.
RAR5117 : Failed to obtain/create connection from connection pool [ com.beonic.tiv5 ]. Reason : com.sun.appserv.connectors.internal.api.PoolingException: java.lang.RuntimeException: Got exception during XAResource.start:
Please suggest how to avoid the GlassFish crash.
finally {
    em = null;
    ic = null;
}
I think the problem is here: you are never committing or closing the transaction.
Take this example from the JTA documentation (check section 5.2.2):
// BMT idiom
@Resource public UserTransaction utx;
@Resource public EntityManagerFactory factory;

public void doBusiness() {
    EntityManager em = factory.createEntityManager();
    try {
        // do some work
        ...
        utx.commit();
    } catch (RuntimeException e) {
        if (utx != null) utx.rollback();
        throw e; // or display error message
    } finally {
        em.close();
    }
}
This is the correct way of handling a transaction. But you are only nulling the references and nothing more; that's why your pooled connections are not being closed.
Here is more documentation about Transactions
It's hard to tell what the real cause of the problem is, but it might be that all your connections have become stale because they were not used for a long time.
It is good practice to set up connection validation, which ensures that connections are reopened when closed by the external server.
There is a thorough article about connection pools in GlassFish/Payara; check out especially the section about connection validation (it uses Derby DB in the example):
To turn on connection validation:
asadmin set resources.jdbc-connection-pool.test-pool.connection-validation-method=custom-validation
asadmin set resources.jdbc-connection-pool.test-pool.validation-classname=org.glassfish.api.jdbc.validation.DerbyConnectionValidation
asadmin set resources.jdbc-connection-pool.test-pool.is-connection-validation-required=true

How to avoid DB2 driver Classloader Memory leak on Tomcat application .war file redeployment

IBM's well-supported JDBC driver creates a memory leak in combination with Tomcat's well-supported connection pool.
Please refer to Classloader memory leak on Tomcat application .war file redeployment.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [DB2JccConfiguration.properties]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1327)
at org.apache.catalina.loader.WebappClassLoaderBase.getResource(WebappClassLoaderBase.java:1023)
at com.ibm.db2.jcc.am.ud.run(Unknown Source)
at java.security.AccessController.doPrivileged(AccessController.java:285)
at com.ibm.db2.jcc.am.GlobalProperties.a(Unknown Source)
at com.ibm.db2.jcc.am.GlobalProperties.d(Unknown Source)
at com.ibm.db2.jcc.am.mq.run(Unknown Source)
at java.util.TimerThread.mainLoop(Timer.java:567)
at java.util.TimerThread.run(Timer.java:517)
I do not understand the suggested solution, as it conflicts with the commonly recommended practice of including the driver jar in the Tomcat lib directory.
We need shared deployment and redeployment without a Tomcat restart. Please share your solution here if you have experience with this software combination and the described issue.
For driver version 4.22.29, I'm currently using this code in a ServletContextListener:
@Override
public void contextDestroyed(ServletContextEvent servletContextEvent) {
    // This fixes the JDBC driver not unloading correctly on a context reload for DB2 JDBC 4.22.29
    try {
        logger.debug("Trying to stop the timer");
        new com.ibm.db2.jcc.am.iq() {
            // instance initializer to execute the fix when the anonymous class is instantiated, i.e. now
            {
                if (a != null) {
                    a.cancel();
                } else {
                    logger.debug("Timer is null, skipped");
                }
            }
        };
        logger.debug("Stopped the timer");
    } catch (Exception e) {
        logger.error("Could not stop the DB2 timer thread", e);
    }
}
Note: since the DB2 driver JAR appears to be obfuscated, the timer storage (com.ibm.db2.jcc.am.iq.a) will probably be different for other driver versions. Also, the constructor of the class you're subclassing might have side effects; in my case there are none.
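For context, a sketch of how such a listener might be wired up, assuming Servlet 3.0 or later (class and package names are hypothetical; the contextDestroyed body would be the one shown above):

package com.example.web; // hypothetical package

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Registered automatically by the container thanks to @WebListener;
// alternatively it can be declared with a <listener> element in web.xml.
@WebListener
public class Db2TimerCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent servletContextEvent) {
        // Nothing to do on startup.
    }

    @Override
    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        // Place the DB2 timer cancellation code shown above here.
    }
}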
How I got to this solution
Get the exception
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoaderBase.getResource(WebappClassLoaderBase.java:1600)
at com.ibm.db2.jcc.am.wd.run(wd.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at com.ibm.db2.jcc.am.GlobalProperties.a(GlobalProperties.java:146)
at com.ibm.db2.jcc.am.GlobalProperties.d(GlobalProperties.java:100)
at com.ibm.db2.jcc.am.dr.run(dr.java:124) <------- point of interest <----------
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
The main class for the timer is com.ibm.db2.jcc.am.dr.
Using IntelliJ, I set a breakpoint in its constructor and waited for the breakpoint to hit.
Go to where it's instantiated, in my case in GlobalProperties, and look at which timer it is scheduled on.
Find a way to access iq.a: since this is a protected static field, we can inherit from iq and, from inside that subclass, access the static field of the parent class to call cancel() on a.
This is a confirmed bug in the IBM JDBC driver version 4.19 (timer task that cannot be disabled). The workaround is to downgrade to version 4.18.
Fix for version 4.19.66:
public void contextDestroyed(ServletContextEvent servletContextEvent) {
    // This fixes the JDBC driver not unloading correctly on a context reload for DB2 JDBC 4.19.66
    try {
        System.out.println("Trying to stop the DB2 timer thread");
        new com.ibm.db2.jcc.am.tp() {
            {
                if (a != null) {
                    a.cancel();
                } else {
                    System.out.println("Timer is null, skipped");
                }
            }
        };
        System.out.println("Stopped the timer");
    } catch (Exception e) {
        System.out.println("Could not stop the DB2 timer thread " + e.getMessage());
    }
}
