In my Spring Boot application, I have the following Hikari datasource configuration:
datasource:
  oracle:
    hikari:
      jdbc-url: URL
      username: Username
      password: password
      minimum-idle: 3
      maximum-pool-size: 10
      max-lifetime: 0
      connection-test-query: select 1 from dual
      leak-detection-threshold: 10000
      driver-class-name: oracle.jdbc.OracleDriver
      idleTimeout: 0
      keepaliveTime: 300000
With this config, I'm getting this error:
com.zaxxer.hikari.pool.ProxyLeakTask : Connection leak detection triggered for oracle.jdbc.driver.T4CConnection#48063c71 on thread taskExecutor-1, stack trace follows
java.lang.Exception: Apparent connection leak detected
Previously reported leaked connection oracle.jdbc.driver.T4CConnection#48063c71 on thread taskExecutor-1 was returned to the pool (unleaked)
The memory usage in AppMetrix has jumped to 90%, which is extremely high. Now, if I run any other job, it fails with a heap out-of-memory error.
Any suggestions on how this can be fixed?
I think it is due to the low value of the hikari.leakDetectionThreshold property (10000 ms, i.e. 10 seconds). Try increasing it to 1 minute (60000 ms).
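For example, if you build the pool programmatically rather than through the Spring YAML above, the same change would look roughly like this (a minimal sketch only; the URL and credentials are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Sketch of the asker's pool with a higher leak-detection threshold.
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE"); // placeholder URL
config.setUsername("username");                                // placeholder
config.setPassword("password");                                // placeholder
config.setMinimumIdle(3);
config.setMaximumPoolSize(10);
config.setLeakDetectionThreshold(60000);                       // 60 seconds instead of 10
HikariDataSource dataSource = new HikariDataSource(config);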
Related
I have a Spring Boot app that reads from and writes to a Postgres database. I use jOOQ and HikariCP to manage the database connections. My app is connected to a Patroni cluster consisting of two PostgreSQL 14.5 instances: one is the master and the other is a read-only replica.
When the service is processing data and I trigger a failover in the database - killing the leader, choosing a new leader, then changing the old leader to a replica - I start getting exceptions like:
org.jooq.exception.DataAccessException: SQL [delete from "public"."my_table" where "public"."my_table"."username" = ?]; ERROR: cannot execute DELETE in a read-only transaction
It looks like a HikariCP/JDBC driver issue where it is still using the connections to the old master (now a replica) instead of evicting them and creating new connections to the new leader.
How can I solve it?
My configuration looks like this:
org.jooq:jooq:3.16.10
org.postgresql:postgresql:42.5.0
org.jooq:jooq-postgres-extensions:3.16.10
com.zaxxer:HikariCP:4.0.3
spring:
  main:
    allow-bean-definition-overriding: true
    banner-mode: "off"
  jooq:
    sql-dialect: Postgres
  jpa:
    open-in-view: false
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    driver-class-name: org.postgresql.Driver
    url: "jdbc:postgresql://my-db-cluster:5432/my-database?tcpKeepAlive=true&ApplicationName=my-app"
    username: ${DATASOURCE_USERNAME}
    password: ${DATASOURCE_PASSWORD}
    hikari:
      minimumIdle: 0
      maximumPoolSize: 10
      auto-commit: false
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.r2dbc.R2dbcAutoConfiguration
I found a solution: adding &targetServerType=primary to the JDBC URL, as the docs state: https://jdbc.postgresql.org/documentation/use/
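For illustration, here is a sketch of the equivalent programmatic configuration; only the targetServerType parameter is the point, and everything else simply mirrors the YAML above:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
// targetServerType=primary tells the pgjdbc driver to connect only to the writable node,
// so connections created after a failover go to the new leader.
config.setJdbcUrl("jdbc:postgresql://my-db-cluster:5432/my-database"
        + "?tcpKeepAlive=true&ApplicationName=my-app&targetServerType=primary");
config.setUsername(System.getenv("DATASOURCE_USERNAME"));
config.setPassword(System.getenv("DATASOURCE_PASSWORD"));
config.setMinimumIdle(0);
config.setMaximumPoolSize(10);
config.setAutoCommit(false);
HikariDataSource dataSource = new HikariDataSource(config);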
I have defined three transactions in which SELECT operations happen on the different parameters passed. I try to invoke this method concurrently, and I get an error:
o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: null
Aug 25, 2020 # 12:16:39.000 2020-08-25 06:46:39.388 ERROR 1 --- [o-9003-exec-630] o.h.engine.jdbc.spi.SqlExceptionHelper : Hikari - Connection is not available, request timed out after 60000ms.
And sometimes
org.postgresql.util.PSQLException: FATAL: remaining connection slots are reserved for non-replication superuser connections
I am new to Java. Please guide me to solve this issue. Do I need to write multithreading code to access these resources, or is this a configuration issue?
hikari:
  poolName: Hikari
  autoCommit: false
  minimumIdle: 5
  connectionTimeout: 60000
  maximumPoolSize: 80
  idleTimeout: 60000
  maxLifetime: 240000
  leakDetectionThreshold: 300000
Multiple threads read from the same table in the database by using the same connection in Java?
This is, generally speaking, not going to work. The JDBC API types Connection, Statement, ResultSet and so on are not generally thread-safe¹. You should not try to use one instance in multiple threads.
If you want to avoid having multiple connections open, the normal approach is to use a JDBC connection pool to manage the connections. When a thread needs to talk to the database, it gets a connection from the pool; when it has finished talking to the database, it releases the connection back to the pool.
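As a rough sketch of that pattern (the pool, query and table name here are made up for illustration):

import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Each worker thread borrows its own connection and returns it by closing it;
// no Connection, Statement or ResultSet instance is ever shared between threads.
void readRow(HikariDataSource pool, long id) throws SQLException {
    try (Connection conn = pool.getConnection();               // borrow from the pool
         PreparedStatement ps = conn.prepareStatement(
                 "select name from my_table where id = ?")) {  // hypothetical table
        ps.setLong(1, id);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }                                                           // close() returns the connection to the pool
}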
In the PostgreSQL / Hikari case:
For PostgreSQL - "Using the driver in a multi-threaded or a servlet environment"
For Hikari - the getConnection() call is thread-safe, but I couldn't find anything that explicitly talked about the thread-safety of the connection object when shared by multiple threads.
¹ I have seen it stated that a spec-compliant JDBC driver should be thread-safe, but I could not see where the JDBC spec actually requires this. Even assuming that it does say so somewhere, the threads sharing a connection would need to coordinate very carefully to avoid things like one thread causing another thread's ResultSet to "spontaneously" close.
I am developing an application using the Play Framework (version 2.8.0) and Java (version 1.8) with an Oracle database (version 12c).
There is at most one hit to the database per day, and I am getting the error below.
java.sql.SQLRecoverableException: IO Error: Socket read timed out
at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:919)
at oracle.jdbc.driver.PhysicalConnection.close(PhysicalConnection.java:2005)
at com.zaxxer.hikari.pool.PoolBase.quietlyCloseConnection(PoolBase.java:138)
at com.zaxxer.hikari.pool.HikariPool.lambda$closeConnection$1(HikariPool.java:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Socket read timed out
at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:174)
at oracle.net.ns.NIOHeader.readHeaderBuffer(NIOHeader.java:82)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:139)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:101)
at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:80)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForReading(T4CMAREngineNIO.java:98)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:534)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:485)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C7Ocommoncall.doOLOGOFF(T4C7Ocommoncall.java:62)
at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:908)
... 6 common frames omitted
db {
  default {
    driver=oracle.jdbc.OracleDriver
    url="jdbc:oracle:thin:#XXX.XXX.XXX.XX:XXXX/XXXXXXX"
    username="XXXXXXXXX"
    password="XXXXXXXXX"
    hikaricp {
      dataSource {
        cachePrepStmts = true
        prepStmtCacheSize = 250
        prepStmtCacheSqlLimit = 2048
      }
    }
  }
}
It seems this is caused by an inactive database connection. How can I solve it?
Please let me know if any other information is required.
You can enable TCP keepalive for JDBC, either by setting the driver's keepalive directive or by adding "ENABLE=BROKEN" to the connection string.
Usually Cisco/Juniper equipment cuts off a TCP connection once it has been inactive for more than an hour, while the Linux kernel starts sending keepalive probes only after two hours (tcp_keepalive_time). So if you decide to turn TCP keepalive on, you will also need root access to lower this kernel tunable (to 10-15 minutes).
Moreover, HikariCP should not keep any connection open for longer than 30 minutes by default.
So if your firewall, the Linux kernel, and HikariCP all use default settings, this error should not occur in your system.
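If the pool were built in code rather than through Play's configuration, the keepalive idea could look roughly like this. This is only a sketch: I am assuming the Oracle thin driver's oracle.net.keepAlive connection property, and the URL is a placeholder.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setDriverClassName("oracle.jdbc.OracleDriver");
config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE");   // placeholder URL
// Ask the thin driver to send TCP keepalive probes so idle connections are not
// silently dropped by the firewall (assumed property name: oracle.net.keepAlive).
config.addDataSourceProperty("oracle.net.keepAlive", "true");
// Retire pooled connections well before the firewall's ~1 hour idle cutoff.
config.setMaxLifetime(900000);                                    // 15 minutes
HikariDataSource dataSource = new HikariDataSource(config);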
See the official HikariCP documentation:
maxLifetime:
This property controls the maximum lifetime of a connection in the pool. An in-use connection will never be retired, only when it is closed will it then be removed. On a connection-by-connection basis, minor negative attenuation is applied to avoid mass-extinction in the pool. We strongly recommend setting this value, and it should be several seconds shorter than any database or infrastructure imposed connection time limit. A value of 0 indicates no maximum lifetime (infinite lifetime), subject of course to the idleTimeout setting. The minimum allowed value is 30000ms (30 seconds). Default: 1800000 (30 minutes)
I have added the configuration below for HikariCP in the configuration file, and it is working fine.
## Database Connection Pool
play.db.pool = hikaricp
play.db.prototype.hikaricp.connectionTimeout=120000
play.db.prototype.hikaricp.idleTimeout=15000
play.db.prototype.hikaricp.leakDetectionThreshold=120000
play.db.prototype.hikaricp.validationTimeout=10000
play.db.prototype.hikaricp.maxLifetime=120000
I use Spring Boot version 2.0.2 to build a web application with the default connection pool, HikariCP.
The HikariCP debug log shows the correct connection count of 2, but the Spring Boot metrics show the connection creation count as 1.
Did I misunderstand something?
Thanks in advance.
My application.yml is below:
spring:
  datasource:
    minimum-idle: 2
    maximum-pool-size: 7
Log:
DEBUG 8936 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - After cleanup stats (total=2, active=0, idle=2, waiting=0)
URL for metrics: http://localhost:8080/xxx/metrics/hikaricp.connections.creation
Response:
{
  name: "hikaricp.connections.creation",
  measurements: [
    {
      statistic: "COUNT",
      value: 1   <--- I think this should be 2
    },
    ...
  ]
}
What you are seeing is HikariCP's fail-fast check behaviour with regard to tracking metrics at this stage.
(I actually dug into this as I didn't know the answer beforehand.)
At this stage a MetricsTracker isn't set yet, and thus the initial connection creation isn't counted. If the initial connection can be established, HikariCP just keeps this connection, so in your case only the next connection creation is counted.
If you really want the metric value to be "correct", you can set spring.datasource.hikari.initialization-fail-timeout=-1. The behaviour is described in HikariCP's README under initializationFailTimeout.
Whether you really need a "correct" value is debatable, as you'll only miss that initial count. Ideally you'll want to reason about connection creation counts in a specific time window - e.g. the rate of connection creations per minute - to determine whether you dispose of connections too early from the pool.
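For completeness, a sketch of the equivalent programmatic setting, assuming you build the pool yourself instead of relying on Spring Boot auto-configuration (the URL is a placeholder):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");   // placeholder URL
// A negative value skips the fail-fast initial connection attempt, so the metrics
// tracker is registered before the pool creates its first connection and the
// creation count is not off by one.
config.setInitializationFailTimeout(-1);
HikariDataSource dataSource = new HikariDataSource(config);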
I'm building an app with Spring Boot that uses MySQL as the database. What I'm seeing is that Spring Boot opens 10 connections to the DB but uses only one.
Every time I run SHOW PROCESSLIST on the DB, 9 connections are sleeping and only one is doing something.
Is there a way to split all the processing between the 10 open connections?
My app needs better MySQL throughput because about 300 records are inserted every minute, so I think splitting the work between these open connections would give better results.
My application.yml:
security:
  basic:
    enabled: false
server:
  context-path: /invest/
  compression:
    enabled: true
    mime-types:
      - application/json,application/xml,text/html,text/xml,text/plain
spring:
  jackson:
    default-property-inclusion: non-null
    serialization:
      write-bigdecimal-as-plain: true
    deserialization:
      read-enums-using-to-string: true
  datasource:
    platform: MYSQL
    url: jdbc:mysql://localhost:3306/invest?useSSL=true
    username: #
    password: #
    separator: $$
    dbcp2:
      test-while-idle: true
      validation-query: SELECT 1
  jpa:
    show-sql: false
    hibernate:
      ddl-auto: update
      naming:
        strategy: org.hibernate.cfg.ImprovedNamingStrategy
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5Dialect
  http:
    encoding:
      charset: UTF-8
      enabled: true
      force: true
Is there a way to do this?
You can check max_connections under [mysqld] in my.ini to make sure your MySQL server allows enough connections.
You can adjust these configuration properties in your application.yml:
spring.datasource.max-active
spring.datasource.max-age
spring.datasource.max-idle
spring.datasource.max-lifetime
spring.datasource.max-open-prepared-statements
spring.datasource.max-wait
spring.datasource.maximum-pool-size
spring.datasource.min-evictable-idle-time-millis
spring.datasource.min-idle
After all, I do not think this is a problem. MySQL can handle 1000+ insertions per second; 300 per minute is far too few for the pool to need more than one connection.
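If you do later need the extra connections, note that the pool only hands them out when several threads request connections at the same time; a single-threaded writer will always reuse one connection. A rough sketch of that idea (the dataSource, table name and input batches are made up for illustration):

import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

void insertInParallel(HikariDataSource dataSource, List<List<String>> batches) {
    ExecutorService workers = Executors.newFixedThreadPool(10);   // one task per pooled connection
    for (List<String> batch : batches) {
        workers.submit(() -> {
            // Each task borrows its own connection, so up to 10 batches insert in parallel.
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "insert into invest_data (payload) values (?)")) {  // hypothetical table
                for (String row : batch) {
                    ps.setString(1, row);
                    ps.addBatch();
                }
                ps.executeBatch();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        });
    }
    workers.shutdown();
}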