Force Hikari/Hibernate to close stale (leaked?) connections - java

I'm working with a FileMaker 16 datasource through the official JDBC driver in Spring Boot 2 with Hibernate 5.3 and Hikari 2.7.
The FileMaker server's performance is poor; a SQL query can take up to a minute on big tables. Sometimes this results in connection leaking, when the connection pool fills up with active connections that are never released.
The question is how to force active connections that have been hanging in the pool for, say, two minutes to close, moving them back to idle and making them available for use again.
As an example, I'm accessing the FileMaker datasource through a RestController using the findAll method in org.springframework.data.repository.PagingAndSortingRepository:
@RestController
public class PatientController {

    @Autowired
    private PatientRepository repository;

    @GetMapping("/patients")
    public Page<Patient> find(Pageable pageable) {
        return repository.findAll(pageable);
    }
}
Calling /patients a few times in a row causes a connection leak; here's what Hikari reports:
2018-09-20 13:49:00.939 DEBUG 1 --- [l-1 housekeeper]
com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats
(total=10, active=10, idle=0, waiting=2)
It also throws exceptions like this:
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128) ~[HikariCP-2.7.9.jar!/:na]
What I need is: if repository.findAll takes more than N seconds, the connection must be killed and the controller method must throw an exception. How can I achieve this?
Here's my Hikari config:
allowPoolSuspension.............false
autoCommit......................true
catalog.........................none
connectionInitSql...............none
connectionTestQuery............."SELECT COUNT(*) FROM Clinics"
connectionTimeout...............30000
dataSource......................none
dataSourceClassName.............none
dataSourceJNDI..................none
dataSourceProperties............{password=<masked>}
driverClassName................."com.filemaker.jdbc.Driver"
healthCheckProperties...........{}
healthCheckRegistry.............none
idleTimeout.....................600000
initializationFailFast..........true
initializationFailTimeout.......1
isolateInternalQueries..........false
jdbc4ConnectionTest.............false
jdbcUrl.........................jdbc:filemaker://***:2399/ec_data
leakDetectionThreshold..........90000
maxLifetime.....................1800000
maximumPoolSize.................10
metricRegistry..................none
metricsTrackerFactory...........none
minimumIdle.....................10
password........................<masked>
poolName........................"HikariPool-1"
readOnly........................false
registerMbeans..................false
scheduledExecutor...............none
scheduledExecutorService........internal
schema..........................none
threadFactory...................internal
transactionIsolation............default
username........................"CHC"
validationTimeout...............5000

HikariCP focuses solely on connection pool management, i.e. managing the connections it has formed.
connectionTimeout - how long a caller will wait for a connection from the pool before an exception is thrown
spring.datasource.hikari.connectionTimeout=30000
maxLifetime - how long a connection will live in the pool before being closed
spring.datasource.hikari.maxLifetime=1800000
idleTimeout - how long an unused connection lives in the pool
spring.datasource.hikari.idleTimeout=30000
Use the javax.persistence.query.timeout query hint to cancel the request if it takes longer than the defined timeout.
javax.persistence.query.timeout (Long – milliseconds)
The javax.persistence.query.timeout hint defines how long a query is
allowed to run before it gets canceled. Hibernate doesn't handle this
timeout itself but provides it to the JDBC driver via the JDBC
Statement.setQueryTimeout method.
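In a Spring Boot application this hint can also be set globally as a JPA property; a minimal sketch (the 5000 ms value is an arbitrary example):

```properties
# Ask Hibernate to forward a 5-second timeout to JDBC Statement.setQueryTimeout
# (value is in milliseconds)
spring.jpa.properties.javax.persistence.query.timeout=5000
```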

The FileMaker JDBC driver ignores the javax.persistence.query.timeout parameter, even though the timeout value is set through the driver's implementation of the java.sql.Statement.setQueryTimeout setter. So I resolved the problem by extending the class com.filemaker.jdbc.Driver and overriding the connect method so that it adds the sockettimeout parameter to the connection properties. With this parameter in place, the FM JDBC driver interrupts the connection if no data has arrived from the socket within the timeout period.
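A minimal sketch of that workaround, showing only the stdlib part: the real class extends com.filemaker.jdbc.Driver and overrides connect(String url, Properties info) to call super.connect(url, withSocketTimeout(info, "120")). The class name, the helper, the "sockettimeout" property key, and the 120-second value are all illustrative assumptions, not the driver's documented API.

```java
import java.util.Properties;

// Sketch: the real subclass of com.filemaker.jdbc.Driver would override
// connect(url, info) and pass the augmented Properties to super.connect().
// "sockettimeout" is the driver-specific key described above (assumption).
public class TimeoutDriverSketch {

    // Return a copy of the connection properties with sockettimeout added
    static Properties withSocketTimeout(Properties info, String seconds) {
        Properties copy = new Properties();
        if (info != null) {
            copy.putAll(info);
        }
        copy.setProperty("sockettimeout", seconds);
        return copy;
    }

    public static void main(String[] args) {
        Properties augmented = withSocketTimeout(new Properties(), "120");
        System.out.println(augmented.getProperty("sockettimeout"));
    }
}
```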
I've also filed an issue with filemaker: https://community.filemaker.com/message/798471

Related

Hikari pool connection issue avoid db access

We have a microservice-based system where one dedicated service (db-reader) does all db-related calls to MySQL.
From time to time there are open-circuit errors from other services towards the db-reader service.
We found that Hikari pool connection closing/opening operations happened during these time periods.
2022-03-28 08:39:25,311 [HikariPool-19 connection closer] DEBUG com.zaxxer.hikari.pool.PoolBase - HikariPool-19 - Closing connection com.mysql.cj.jdbc.ConnectionImpl#66fd15ad: (connection is evicted or dead)
2022-03-28 08:58:25,396 [HikariPool-19 connection closer] DEBUG com.zaxxer.hikari.pool.PoolBase - HikariPool-19 - Closing connection com.mysql.cj.jdbc.ConnectionImpl#413992c9: (connection has passed maxLifetime)
2022-03-28 08:58:25,399 [HikariPool-19 connection adder] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-19 - Added connection com.mysql.cj.jdbc.ConnectionImpl#759ad492
In db-reader service configurations we have :
hikariConfig.setConnectionTimeout(30000);
hikariConfig.setIdleTimeout(35000);
hikariConfig.setMaxLifetime(45000);
As the log suggests, connections are closed due to maxLifetime, but why do other services get an open circuit when one connection from the pool is dead? (The connection pool size is 50.)
Is there a way to avoid this happening?
try
setConnectionTimeout(15_000); // 15 sec
setIdleTimeout(60_000);       // 1 min
setMaxLifetime(300_000);      // 5 min
IdleTimeout can be zero, or at least half of MaxLifetime.
also check timeout setting of framework/vpn/device/network/db that closes the connection earlier than your pool timeout setting.

(This connection has been closed.). Possibly consider using a shorter maxLifetime value when using hikari to connect PostgreSQL

I am using Hikari to connect to a PostgreSQL 13 database in a Spring Boot project; now the log shows this warning:
[12:23:49:633] [WARN] - com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:158) - HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#a54a5357 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
[12:23:49:634] [WARN] - com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:158) - HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#5d8a4eb4 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
[12:23:49:636] [WARN] - com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:158) - HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#18564799 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
As a first step I tweaked the PostgreSQL 13 idle_in_transaction_session_timeout parameter like this:
alter system set idle_in_transaction_session_timeout='30min';
show idle_in_transaction_session_timeout;
I made sure the PostgreSQL 13 idle_in_transaction_session_timeout parameter was set to 30min. As the next step I tweaked Hikari's maxLifetime in application.properties like this:
spring.datasource.hikari.max-lifetime=900000
Hikari's maxLifetime is now less than the database's idle connection timeout, but the warning message did not disappear. Am I missing something? What should I do to fix the warning message?

multiple Threads read to the same table in database by using the same connection in java?

I have defined three transactions in which SELECT operations happen for the different parameters passed. I try to invoke this method concurrently and I get an error:
o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: null
2020-08-25 06:46:39.388 ERROR 1 --- [o-9003-exec-630] o.h.engine.jdbc.spi.SqlExceptionHelper : Hikari - Connection is not available, request timed out after 60000ms.
And sometimes
org.postgresql.util.PSQLException: FATAL: remaining connection slots are reserved for non-replication superuser connections
I am new to Java. Please guide me to solve this issue. Do I need to write multithreading code to access a number of resources, or is this a configuration issue?
hikari:
poolName: Hikari
autoCommit: false
minimumIdle: 5
connectionTimeout: 60000
maximumPoolSize: 80
idleTimeout: 60000
maxLifetime: 240000
leakDetectionThreshold: 300000
Multiple threads read from the same table in the database by using the same connection in Java?
This is, generally speaking, not going to work. The JDBC API types Connection, Statement, ResultSet and so on are not generally thread-safe 1. You should not try to use one instance in multiple threads.
If you want to avoid having multiple connections open, the normal approach is to use a JDBC connection pool to manage the connections. When a thread needs to talk to the database, it gets a connection from the pool. When it has finished talking to the database, it releases the connection back to the pool.
In the PostgreSQL / Hikari case:
For PostgreSQL - "Using the driver in a multi-threaded or a servlet environment"
For Hikari - the getConnection() call is thread-safe, but I couldn't find anything that explicitly talked about the thread-safety of the connection object when shared by multiple threads.
1 - I have seen it stated that a spec compliant JDBC driver should be thread-safe, but I could not see where the JDBC spec actually requires this to be so. But even assuming that it does say that somewhere, the threads sharing a connection would need to coordinate very carefully to avoid things like one thread causing another thread's resultset to "spontaneously" close.
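The borrow/release pattern described above can be illustrated with a stdlib-only toy: a BlockingQueue of string tokens stands in for the pool. A real pool such as Hikari hands out java.sql.Connection objects via dataSource.getConnection(), typically in a try-with-resources block, but the discipline is the same: each thread borrows its own connection for one operation and releases it afterwards.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy pool: each worker borrows its own "connection" for one operation
// and releases it afterwards; no two threads ever share one instance.
public class ToyPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> pool = new ArrayBlockingQueue<>(2);
        pool.add("conn-1");
        pool.add("conn-2");

        ExecutorService exec = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            exec.submit(() -> {
                try {
                    String conn = pool.take();   // borrow: blocks until one is free
                    // ... run this thread's query on its own connection ...
                    pool.put(conn);              // release back to the pool
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        exec.shutdown();
        exec.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("pool size after: " + pool.size());
    }
}
```

All four workers complete even though the pool holds only two tokens, because each releases its token for the next borrower; the pool ends up full again.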

Spring boot metrics shows HikariCP connection creation count 1, when HikariCP debug log's connection total is 2

I use Spring Boot 2.0.2 to make a web application with the default connection pool, HikariCP.
HikariCP's debug log shows the correct connection count (2), but the Spring Boot metric shows a connection creation count of 1.
Did I misunderstand something?
Thanks in advance.
application.yml is the below
spring:
datasource:
minimum-idle: 2
maximum-pool-size: 7
Log:
DEBUG 8936 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - After cleanup stats (total=2, active=0, idle=2, waiting=0)
URL for metrics:http://localhost:8080/xxx/metrics/hikaricp.connections.creation
Response:
{
name: "hikaricp.connections.creation",
measurements:
[
{
statistic: "COUNT",
value: 1 <--- I think this should be 2
},
...
]
}
What you are seeing is HikariCP's fail-fast check behaviour with regard to tracking metrics.
(I actually dug into this, as I didn't know the answer beforehand.)
At that stage a MetricsTracker isn't set yet, and thus the initial connection creation isn't counted. If the initial connection can be established, HikariCP just keeps this connection. In your case only the next connection creation is counted.
In case you really want the metric value to be "correct", you can set spring.datasource.hikari.initialization-fail-timeout=-1. The behaviour is described in HikariCP's README under initializationFailTimeout.
Whether you really need a "correct" value is debatable, as you'll only miss that initial count. Ideally you'll want to reason about connection creation counts within a specific time window, e.g. the rate of connection creations per minute, to determine whether you dispose of connections too early from the pool.
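Applied to the application.yml style from the question, the setting would look like this (a sketch; -1 disables the fail-fast check, as described in the README):

```yaml
spring:
  datasource:
    hikari:
      initialization-fail-timeout: -1
```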

Query on ThreadSafeClientConnManager (Apache HttpClient 4.1.1)

I am using ThreadSafeClientConnManager from Apache HttpComponents Client 4.1.1 for my connection pool.
When I release the connection back to the pool I call:
cm.releaseConnection(client, -1, TimeUnit.SECONDS);
cm.closeExpiredConnections();
cm.closeIdleConnections(20, TimeUnit.SECONDS);
(Here cm is an instance of ThreadSafeClientConnManager.)
As mentioned in the javadoc for releaseConnection(ManagedClientConnection conn, long validDuration, TimeUnit timeUnit), I am setting validDuration to a negative (<= 0) value.
But when i see server logs i find that:
org.apache.http.impl.conn.DefaultClientConnection] Connection shut down
2011-08-17 14:12:48.992 DEBUG Other Thread-257 org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager Released connection is not reusable.
2011-08-17 14:12:48.992 DEBUG Other Thread-257 org.apache.http.impl.conn.tsccm.ConnPoolByRoute Releasing connection [HttpRoute[{}->http://server-name:port][null]
2011-08-17 14:12:48.992 DEBUG Other Thread-257 [org.apache.http.impl.conn.tsccm.ConnPoolByRoute] Notifying no-one, there are no waiting threads
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager] Closing expired connections
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [shaded.org.apache.http.impl.conn.tsccm.ConnPoolByRoute] Closing expired connections
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [shaded.org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager] Closing connections idle longer than 20 SECONDS
Here I see in the logs that "Released connection is not reusable".
Does that mean that -1 is not making the connection reusable, and connections are closed instead of being returned to the pool?
If so, can anyone please suggest how I can make it reusable?
Thanks in advance.
By default, HTTP connections released back to the manager are considered non-reusable. If a connection remains in a consistent state, it should be marked as reusable by calling ManagedClientConnection#markReusable() prior to being released back to the manager.
