Hikari pool connection issue avoid db access - java

We have a microservice-based system where one dedicated service (db-reader) handles all DB calls to MySQL.
From time to time, other services get open-circuit errors when calling the db-reader service.
We found that HikariCP connection closing/opening operations happened during these time periods.
2022-03-28 08:39:25,311 [HikariPool-19 connection closer] DEBUG com.zaxxer.hikari.pool.PoolBase - HikariPool-19 - Closing connection com.mysql.cj.jdbc.ConnectionImpl@66fd15ad: (connection is evicted or dead)
2022-03-28 08:58:25,396 [HikariPool-19 connection closer] DEBUG com.zaxxer.hikari.pool.PoolBase - HikariPool-19 - Closing connection com.mysql.cj.jdbc.ConnectionImpl@413992c9: (connection has passed maxLifetime)
2022-03-28 08:58:25,399 [HikariPool-19 connection adder] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-19 - Added connection com.mysql.cj.jdbc.ConnectionImpl@759ad492
In the db-reader service configuration we have:
hikariConfig.setConnectionTimeout(30000);
hikariConfig.setIdleTimeout(35000);
hikariConfig.setMaxLifetime(45000);
As the log suggests, connections are closed due to maxLifetime, but why do other services get open-circuit errors when one connection from the pool is dead? (The connection pool size is 50.)
Is there a way to avoid this happening?

Try:
setConnectionTimeout(15_000); // 15 sec
setIdleTimeout(60_000);       // 1 min
setMaxLifetime(300_000);      // 5 min
idleTimeout can be zero, or at least half of maxLifetime.
Also check the timeout settings of any framework/VPN/device/network/DB that might close the connection earlier than your pool's timeout settings.
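For reference, a minimal sketch of how those values could be applied to a HikariConfig; the JDBC URL and credentials are illustrative placeholders (not from the question), while the pool size of 50 comes from the question:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

// Sketch only: URL and credentials are placeholders.
public static DataSource buildDbReaderPool() {
    HikariConfig hikariConfig = new HikariConfig();
    hikariConfig.setJdbcUrl("jdbc:mysql://db-host:3306/reader_db");
    hikariConfig.setUsername("db_user");
    hikariConfig.setPassword("db_password");
    hikariConfig.setMaximumPoolSize(50);        // pool size mentioned in the question
    hikariConfig.setConnectionTimeout(15_000);  // 15 sec
    hikariConfig.setIdleTimeout(60_000);        // 1 min (0 disables idle eviction)
    hikariConfig.setMaxLifetime(300_000);       // 5 min, kept below any server-side/network timeout
    return new HikariDataSource(hikariConfig);
}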

Related

(This connection has been closed.). Possibly consider using a shorter maxLifetime value when using hikari to connect PostgreSQL

I am using Hikari to connect to a PostgreSQL 13 database in a Spring Boot project. The logs show this warning:
[12:23:49:633] [WARN] - com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:158) - HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection@a54a5357 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
[12:23:49:634] [WARN] - com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:158) - HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection@5d8a4eb4 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
[12:23:49:636] [WARN] - com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:158) - HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection@18564799 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
As a first step I tweaked the PostgreSQL 13 idle_in_transaction_session_timeout parameter like this:
alter system set idle_in_transaction_session_timeout='30min';
show idle_in_transaction_session_timeout;
I made sure the PostgreSQL 13 idle_in_transaction_session_timeout parameter was set to 30min. Next I tweaked the Hikari maxLifetime in application.properties like this:
spring.datasource.hikari.max-lifetime=900000
The Hikari maxLifetime is now less than the database's idle connection timeout, but the warning message did not disappear. Am I missing something? What should I do to fix the warning message?

Force Hikari/Hibernate to close stale (leaked?) connections

I'm working with a FileMaker 16 datasource through the official JDBC driver in Spring Boot 2 with Hibernate 5.3 and Hikari 2.7.
The FileMaker server's performance is poor; a single SQL query can take up to a minute on big tables. Sometimes this results in connection leaking: the connection pool fills up with active connections that are never released.
The question is how to force active connections that have been hanging in the pool for, say, two minutes to close, moving them back to idle and making them available for use again.
As an example, I'm accessing the FileMaker datasource through a RestController using the findAll method in org.springframework.data.repository.PagingAndSortingRepository:
@RestController
public class PatientController {

    @Autowired
    private PatientRepository repository;

    @GetMapping("/patients")
    public Page<Patient> find(Pageable pageable) {
        return repository.findAll(pageable);
    }
}
Calling /patients a few times in a row causes connection leaking; here's what Hikari reports:
2018-09-20 13:49:00.939 DEBUG 1 --- [l-1 housekeeper]
com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats
(total=10, active=10, idle=0, waiting=2)
It also throws exceptions like this:
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128) ~[HikariCP-2.7.9.jar!/:na]
What I need is this: if repository.findAll takes more than N seconds, the connection must be killed and the controller method must throw an exception. How can I achieve that?
Here's my Hikari config:
allowPoolSuspension.............false
autoCommit......................true
catalog.........................none
connectionInitSql...............none
connectionTestQuery............."SELECT COUNT(*) FROM Clinics"
connectionTimeout...............30000
dataSource......................none
dataSourceClassName.............none
dataSourceJNDI..................none
dataSourceProperties............{password=<masked>}
driverClassName................."com.filemaker.jdbc.Driver"
healthCheckProperties...........{}
healthCheckRegistry.............none
idleTimeout.....................600000
initializationFailFast..........true
initializationFailTimeout.......1
isolateInternalQueries..........false
jdbc4ConnectionTest.............false
jdbcUrl.........................jdbc:filemaker://***:2399/ec_data
leakDetectionThreshold..........90000
maxLifetime.....................1800000
maximumPoolSize.................10
metricRegistry..................none
metricsTrackerFactory...........none
minimumIdle.....................10
password........................<masked>
poolName........................"HikariPool-1"
readOnly........................false
registerMbeans..................false
scheduledExecutor...............none
scheduledExecutorService........internal
schema..........................none
threadFactory...................internal
transactionIsolation............default
username........................"CHC"
validationTimeout...............5000
HikariCP focuses purely on connection pool management, i.e. managing the connections that it has created.
connectionTimeout - how long HikariCP will wait for a connection to the database to be established (basically a JDBC connection)
spring.datasource.hikari.connectionTimeout=30000
maxLifetime - how long a connection will live in the pool before being closed
spring.datasource.hikari.maxLifetime=1800000
idleTimeout - how long an unused connection lives in the pool
spring.datasource.hikari.idleTimeout=30000
Use javax.persistence.query.timeout to cancel the request if it takes longer than the defined timeout.
javax.persistence.query.timeout (Long – milliseconds)
The javax.persistence.query.timeout hint defines how long a query is
allowed to run before it gets canceled. Hibernate doesn’t handle this
timeout itself but provides it to the JDBC driver via the JDBC
Statement.setQueryTimeout method.
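For example, with Spring Data JPA the hint can be attached to the repository method from the question by redeclaring it. This is only a sketch; the Long ID type and the 120000 ms value are assumptions:

import javax.persistence.QueryHint;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.QueryHints;
import org.springframework.data.repository.PagingAndSortingRepository;

public interface PatientRepository extends PagingAndSortingRepository<Patient, Long> {

    // Redeclare findAll so the JPA query timeout hint (milliseconds) is applied;
    // Hibernate hands it to the JDBC driver via Statement.setQueryTimeout.
    @Override
    @QueryHints(@QueryHint(name = "javax.persistence.query.timeout", value = "120000"))
    Page<Patient> findAll(Pageable pageable);
}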
The FileMaker JDBC driver ignores the javax.persistence.query.timeout parameter, even though the timeout value is set through the driver's implementation of the java.sql.Statement.setQueryTimeout setter. So I resolved the problem by extending the class com.filemaker.jdbc.Driver and overriding the connect method so that it adds the sockettimeout parameter to the connection properties. With this parameter in place, the FileMaker JDBC driver interrupts the connection if no data has arrived from the socket for the timeout period.
I've also filed an issue with filemaker: https://community.filemaker.com/message/798471
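A rough sketch of the wrapper driver described above. The sockettimeout property name comes from the answer, but the value, its unit, and whether connect can be overridden in the FileMaker driver are assumptions to check against the driver documentation:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

// Hypothetical wrapper: inject a socket timeout into every connection the driver opens.
public class TimeoutAwareFileMakerDriver extends com.filemaker.jdbc.Driver {

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        Properties withTimeout = new Properties();
        if (info != null) {
            withTimeout.putAll(info);
        }
        // Value and unit are assumptions; consult the FileMaker JDBC documentation.
        withTimeout.setProperty("sockettimeout", "120");
        return super.connect(url, withTimeout);
    }
}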

Spring boot metrics shows HikariCP connection creation count 1, when HikariCP debug log's connection total is 2

I use Spring Boot 2.0.2 to build a web application with the default connection pool, HikariCP.
The HikariCP debug log shows the correct connection count (2), but the Spring Boot metric reports a connection creation count of 1.
Did I misunderstand?
Thanks in advance.
application.yml is below:
spring:
  datasource:
    minimum-idle: 2
    maximum-pool-size: 7
Log:
DEBUG 8936 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - After cleanup stats (total=2, active=0, idle=2, waiting=0)
URL for metrics: http://localhost:8080/xxx/metrics/hikaricp.connections.creation
Response:
{
  name: "hikaricp.connections.creation",
  measurements:
  [
    {
      statistic: "COUNT",
      value: 1 <--- I think this should be 2
    },
    ...
  ]
}
What you are seeing is HikariCP's fail-fast check behaviour with regard to how metrics are tracked at startup.
(I actually dug into this, as I didn't know the answer beforehand.)
At that point a MetricsTracker isn't set yet, so the initial connection creation isn't counted. If the initial connection can be established, HikariCP simply keeps it. In your case only the next connection creation is counted.
If you really want the metric value to be "correct", you can set spring.datasource.hikari.initialization-fail-timeout=-1. The behaviour is described in HikariCP's README under initializationFailTimeout.
Whether you really need a "correct" value is debatable, since you only miss that initial count. Ideally you want to reason about connection creation counts over a specific time window - e.g. the rate of connection creations per minute - to determine whether you are disposing of connections too early.
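If you do want that first connection counted, the property from the answer could go into the application.yml shown above roughly like this (a sketch; Hikari-specific settings are bound under the hikari key):

spring:
  datasource:
    hikari:
      # A negative value skips the fail-fast initial connection, so every connection
      # is created after the MetricsTracker is registered and is therefore counted.
      initialization-fail-timeout: -1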

AMQP Channel shutdown but Consumer not always restart

I have frequent Channel shutdown: connection error issues (under the 24.133.241:5671 thread; the name is truncated) in the RabbitMQ Java client (my producer and consumer are far apart). Most of the time the consumer is automatically restarted, as I have enabled heartbeats (15 seconds). However, in some instances there was only Channel shutdown: connection error with no Consumer raised exception and no Restarting Consumer (under the cTaskExecutor-4 thread).
My current workaround is to restart my application. Can anyone shed some light on this matter?
2017-03-20 12:42:38.856 ERROR 24245 --- [24.133.241:5671] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error
2017-03-20 12:42:39.642 WARN 24245 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
...
2017-03-20 12:42:39.642 INFO 24245 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{amq.ctag-4CqrRsUP8plDpLQdNcOjDw=21-05060179}], channel=Cached Rabbit Channel: AMQChannel(amqp://21-05060179@10.24.133.241:5671/,1), conn: Proxy@7ec31754 Shared Rabbit Connection: SimpleConnection@44bac9ec [delegate=amqp://21-05060179@10.24.133.241:5671/], acknowledgeMode=NONE local queue size=0
Generally, this is due to the consumer thread being "stuck" in user code somewhere, so it can't react to the broken connection.
If you have network issues, perhaps it's stuck reading or writing to a socket; make sure you have timeouts set for any I/O operations.
Next time it happens take a thread dump to see what the consumer threads are doing.
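Purely as an illustration of the "timeouts on I/O" point (host and port are placeholders, not from the question), any blocking socket work done on the consumer thread can be bounded like this:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative sketch: explicit connect and read timeouts make a dead peer surface as a
// SocketTimeoutException instead of hanging the listener thread indefinitely.
static void callBackendWithTimeouts(String host, int port) throws IOException {
    try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress(host, port), 5_000); // connect timeout: 5 s
        socket.setSoTimeout(30_000);                              // read timeout: 30 s
        // ... read/write as usual; a stalled read now fails after 30 s ...
    }
}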

Query on ThreadSafeClientConnManager (Apache HttpClient 4.1.1)

I am using ThreadSafeClientConnManager from Apache HttpComponents Client 4.1.1 for my connection pool.
When I release the connection back to the pool I do:
cm.releaseConnection(client, -1, TimeUnit.SECONDS);
cm.closeExpiredConnections();
cm.closeIdleConnections(20, TimeUnit.SECONDS);
[Here cm is an instance of ThreadSafeClientConnManager]
and, as mentioned in the javadoc for releaseConnection(ManagedClientConnection conn, long validDuration, TimeUnit timeUnit), I am setting the valid duration to a negative (<= 0) value.
But when I look at the server logs I find:
org.apache.http.impl.conn.DefaultClientConnection] Connection shut down
2011-08-17 14:12:48.992 DEBUG Other Thread-257 org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager Released connection is not reusable.
2011-08-17 14:12:48.992 DEBUG Other Thread-257 org.apache.http.impl.conn.tsccm.ConnPoolByRoute Releasing connection [HttpRoute[{}->http://server-name:port][null]
2011-08-17 14:12:48.992 DEBUG Other Thread-257 [org.apache.http.impl.conn.tsccm.ConnPoolByRoute] Notifying no-one, there are no waiting threads
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager] Closing expired connections
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [shaded.org.apache.http.impl.conn.tsccm.ConnPoolByRoute] Closing expired connections
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [shaded.org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager] Closing connections idle longer than 20 SECONDS
Here I see in the logs that "Released connection is not reusable".
Does that mean that "-1" is not making the connection reusable, and connections are being closed instead of returned to the pool?
If so, can anyone please suggest how I can make it reusable?
Thanks in advance.
By default, HTTP connections released back to the manager are considered non-reusable. If a connection remains in a consistent state it should be marked as re-usable by calling ManagedClientConnection#markReusable() prior to being released back to the manager.
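A minimal sketch of that pattern (assuming conn is the ManagedClientConnection the response was read from and its entity has been fully consumed):

import java.util.concurrent.TimeUnit;
import org.apache.http.conn.ManagedClientConnection;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;

// Sketch: mark the connection reusable before handing it back; otherwise the
// manager closes it instead of returning it to the pool.
static void release(ThreadSafeClientConnManager cm, ManagedClientConnection conn) {
    conn.markReusable();                              // connection state is still consistent
    cm.releaseConnection(conn, -1, TimeUnit.SECONDS); // negative duration = no explicit keep-alive expiry
}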
