Query on ThreadSafeClientConnManager (Apache HttpClient 4.1.1) - java

I am using ThreadSafeClientConnManager from Apache HttpComponents Client 4.1.1 for my connection pool.
When I release a connection back to the pool I call:
cm.releaseConnection(client, -1, TimeUnit.SECONDS);
cm.closeExpiredConnections();
cm.closeIdleConnections(20, TimeUnit.SECONDS);
(Here cm is an instance of ThreadSafeClientConnManager.)
As mentioned in the javadoc for releaseConnection(ManagedClientConnection conn, long validDuration, TimeUnit timeUnit), I am setting the valid duration to a negative (<= 0) value.
But in the server logs I see:
org.apache.http.impl.conn.DefaultClientConnection] Connection shut down
2011-08-17 14:12:48.992 DEBUG Other Thread-257 org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager Released connection is not reusable.
2011-08-17 14:12:48.992 DEBUG Other Thread-257 org.apache.http.impl.conn.tsccm.ConnPoolByRoute Releasing connection [HttpRoute[{}->http://server-name:port][null]
2011-08-17 14:12:48.992 DEBUG Other Thread-257 [org.apache.http.impl.conn.tsccm.ConnPoolByRoute] Notifying no-one, there are no waiting threads
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager] Closing expired connections
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [shaded.org.apache.http.impl.conn.tsccm.ConnPoolByRoute] Closing expired connections
2011-08-17 14:12:48.993 DEBUG Other Thread-257 [shaded.org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager] Closing connections idle longer than 20 SECONDS
The logs say "Released connection is not reusable".
Does that mean that -1 does not make the connection reusable, and connections are closed instead of being returned to the pool?
If so, can anyone please suggest how I can make them reusable?
Thanks in advance.

By default, HTTP connections released back to the manager are considered non-reusable. If a connection remains in a consistent state, it should be marked as reusable by calling ManagedClientConnection#markReusable() before being released back to the manager.
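A minimal sketch of what that looks like with the low-level HttpClient 4.1.x connection API (host, port and timeout values are placeholders; the actual request/response exchange happens where the comment is):

import java.util.concurrent.TimeUnit;
import org.apache.http.HttpHost;
import org.apache.http.conn.ClientConnectionRequest;
import org.apache.http.conn.ManagedClientConnection;
import org.apache.http.conn.routing.HttpRoute;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;

public class ReleaseExample {
    public static void main(String[] args) throws Exception {
        ThreadSafeClientConnManager cm = new ThreadSafeClientConnManager();
        HttpRoute route = new HttpRoute(new HttpHost("server-name", 80)); // placeholder host/port
        ClientConnectionRequest request = cm.requestConnection(route, null);
        ManagedClientConnection conn = request.getConnection(10, TimeUnit.SECONDS);
        try {
            // ... open the connection and run the request/response exchange on conn ...
            conn.markReusable(); // only if the connection is still in a consistent state
        } finally {
            // -1 means no explicit keep-alive limit; a reusable connection goes back to the pool
            cm.releaseConnection(conn, -1, TimeUnit.SECONDS);
        }
        cm.shutdown();
    }
}

If you use DefaultHttpClient rather than driving the manager directly, fully consuming the response entity normally takes care of marking the connection reusable for you.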

Related

Hikari pool connection issue avoid db access

We have a microservice-based system in which one dedicated service (db-reader) makes all database calls to MySQL.
From time to time, other services get open-circuit errors when calling the db-reader service.
We found that Hikari pool connections were being closed and opened during those periods.
2022-03-28 08:39:25,311 [HikariPool-19 connection closer] DEBUG com.zaxxer.hikari.pool.PoolBase - HikariPool-19 - Closing connection com.mysql.cj.jdbc.ConnectionImpl@66fd15ad: (connection is evicted or dead)
2022-03-28 08:58:25,396 [HikariPool-19 connection closer] DEBUG com.zaxxer.hikari.pool.PoolBase - HikariPool-19 - Closing connection com.mysql.cj.jdbc.ConnectionImpl@413992c9: (connection has passed maxLifetime)
2022-03-28 08:58:25,399 [HikariPool-19 connection adder] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-19 - Added connection com.mysql.cj.jdbc.ConnectionImpl@759ad492
In the db-reader service we have the following configuration:
hikariConfig.setConnectionTimeout(30000);
hikariConfig.setIdleTimeout(35000);
hikariConfig.setMaxLifetime(45000);
As the logs suggest, connections are closed because they exceed maxLifetime, but why do other services get open circuits when one connection from the pool dies? (The connection pool size is 50.)
Is there a way to avoid this?
Try:
hikariConfig.setConnectionTimeout(15_000); // 15 seconds
hikariConfig.setIdleTimeout(60_000);       // 1 minute
hikariConfig.setMaxLifetime(300_000);      // 5 minutes
idleTimeout can be zero, or at least half of maxLifetime.
Also check the timeout settings of any framework, VPN, device, network component, or the database itself that might close the connection earlier than your pool's timeout settings.
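Pulling those values together, a hedged sketch of the pool setup might look like this (the JDBC URL and credentials are placeholders; the timeouts are the values suggested above):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig hikariConfig = new HikariConfig();
hikariConfig.setJdbcUrl("jdbc:mysql://db-host:3306/appdb"); // placeholder
hikariConfig.setUsername("app_user");                        // placeholder
hikariConfig.setPassword("app_password");                    // placeholder
hikariConfig.setMaximumPoolSize(50);       // the pool size mentioned in the question
hikariConfig.setConnectionTimeout(15_000); // wait at most 15 s for a connection from the pool
hikariConfig.setIdleTimeout(60_000);       // retire connections idle for more than 1 min
hikariConfig.setMaxLifetime(300_000);      // recycle connections after 5 min
HikariDataSource dataSource = new HikariDataSource(hikariConfig);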

reactor-netty TcpClient connection pool does not release connections

Using a reactor-netty ConnectionProvider, the connection is not released back to the pool, so it cannot be reused until it is disconnected.
I found a workaround: disconnecting the connection after the response is received. It is ugly, though, and only acceptable in my case because I connect to a neighbouring server on the same network.
clientsMap.computeIfAbsent("$host:$port") { hostPort ->
    TcpClient.create(
        ConnectionProvider.builder(hostPort)
            .maxConnections(maxConnections)
            .pendingAcquireTimeout(Duration.ofSeconds(4))
            .maxIdleTime(Duration.ofSeconds(10))
            .build()
    )
        .host(host)
        .port(port)
        .observe { _, state -> println("state: $state") }
        // workaround as one packet is certainly less than 4096 bytes and the connection is stable enough
        .option(ChannelOption.RCVBUF_ALLOCATOR, FixedRecvByteBufAllocator(4096))
}.connect().flatMap { connection ->
    connection
        .outbound()
        .sendByteArray(Mono.just(send))
        .then()
        .thenMany(connection.inbound().receive().asByteArray())
        .takeUntil { it.size < 4096 }
        .map { it.toHexString() }
        .collect(Collectors.joining())
        .timeout(timeout)
        // the next line causes disconnecting after the response has been read
        .doFinally { connection.channel().disconnect() }
}
When requesting data, the output is:
state: [connected]
state: [configured]
state: [disconnecting]
state: [connected]
state: [configured]
state: [disconnecting]
But this solution is much worse than returning the connection to the pool and reusing it later.
I expected a connectionProvider.release(connection) method to forcibly return the connection to the pool, which I would then call in doFinally, but there is nothing like that.
If the doFinally call is commented out, state: [disconnecting] is not logged, and once more than maxConnections connections have been acquired and not yet disconnected by the remote host, we get:
Pool#acquire(Duration) has been pending for more than the configured timeout of 4000ms
The tested reactor-netty versions are 1.0.6 - 1.0.13.
I would appreciate a solution that returns the connection to the pool and reuses it instead of disconnecting.

How to solve "Socket read timed out" when using hikari connection pool

I am developing an application using the Play framework (version 2.8.0) and Java 1.8 with an Oracle database (version 12c).
There are only zero or one hits to the database per day, and I am getting the error below.
java.sql.SQLRecoverableException: IO Error: Socket read timed out
at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:919)
at oracle.jdbc.driver.PhysicalConnection.close(PhysicalConnection.java:2005)
at com.zaxxer.hikari.pool.PoolBase.quietlyCloseConnection(PoolBase.java:138)
at com.zaxxer.hikari.pool.HikariPool.lambda$closeConnection$1(HikariPool.java:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Socket read timed out
at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:174)
at oracle.net.ns.NIOHeader.readHeaderBuffer(NIOHeader.java:82)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:139)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:101)
at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:80)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForReading(T4CMAREngineNIO.java:98)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:534)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:485)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C7Ocommoncall.doOLOGOFF(T4C7Ocommoncall.java:62)
at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:908)
... 6 common frames omitted
db {
  default {
    driver=oracle.jdbc.OracleDriver
    url="jdbc:oracle:thin:@XXX.XXX.XXX.XX:XXXX/XXXXXXX"
    username="XXXXXXXXX"
    password="XXXXXXXXX"
    hikaricp {
      dataSource {
        cachePrepStmts = true
        prepStmtCacheSize = 250
        prepStmtCacheSqlLimit = 2048
      }
    }
  }
}
It seems this is caused by an inactive database connection. How can I solve it?
Please let me know if any other information is required.
You can enable TCP keepalive for JDBC, either by setting the corresponding directive or by adding "ENABLE=BROKEN" to the connection string.
Usually Cisco/Juniper equipment cuts off a TCP connection when it has been inactive for more than an hour, while the Linux kernel only starts sending keepalive probes after two hours (tcp_keepalive_time). So if you decide to turn TCP keepalive on, you will also need root access to lower this kernel tunable to 10-15 minutes.
Moreover, HikariCP should not keep any connection open for longer than 30 minutes by default.
So if your firewall, the Linux kernel, and HikariCP all use default settings, this error should not occur in your system.
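For the "ENABLE=BROKEN" route, the URL has to use the long TNS descriptor form. A sketch of what that could look like in the Play config above (host, port and service name are placeholders, masked like the rest of the question):

db.default.url="jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=TCP)(HOST=XXX.XXX.XXX.XX)(PORT=XXXX))(CONNECT_DATA=(SERVICE_NAME=XXXXXXX)))"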
See HikariCP official documentation
maxLifetime:
This property controls the maximum lifetime of a connection in the
pool. An in-use connection will never be retired, only when it is
closed will it then be removed. On a connection-by-connection basis,
minor negative attenuation is applied to avoid mass-extinction in the
pool. We strongly recommend setting this value, and it should be
several seconds shorter than any database or infrastructure imposed
connection time limit. A value of 0 indicates no maximum lifetime
(infinite lifetime), subject of course to the idleTimeout setting. The
minimum allowed value is 30000ms (30 seconds). Default: 1800000 (30
minutes)
I have added the HikariCP configuration below to the configuration file and it is working fine.
## Database Connection Pool
play.db.pool = hikaricp
play.db.prototype.hikaricp.connectionTimeout=120000
play.db.prototype.hikaricp.idleTimeout=15000
play.db.prototype.hikaricp.leakDetectionThreshold=120000
play.db.prototype.hikaricp.validationTimeout=10000
play.db.prototype.hikaricp.maxLifetime=120000

Force Hikari/Hibernate to close stale (leaked?) connections

I'm working with a FileMaker 16 data source through the official JDBC driver in Spring Boot 2, with Hibernate 5.3 and Hikari 2.7.
The FileMaker server's performance is poor; a single SQL query can take up to a minute on big tables. Sometimes this results in connection leaking, where the connection pool is full of active connections that are never released.
The question is how to force active connections that have been hanging in the pool for, say, two minutes to close, moving them back to idle and making them available for use again.
As an example, I'm accessing the FileMaker data source through a RestController, using the findAll method of org.springframework.data.repository.PagingAndSortingRepository:

@RestController
public class PatientController {

    @Autowired
    private PatientRepository repository;

    @GetMapping("/patients")
    public Page<Patient> find(Pageable pageable) {
        return repository.findAll(pageable);
    }
}
Calling /patients a few times in a row causes connection leaking; here's what Hikari reports:
2018-09-20 13:49:00.939 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=10, idle=0, waiting=2)
It also throws exceptions like this:
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128) ~[HikariCP-2.7.9.jar!/:na]
What I need is this: if repository.findAll takes more than N seconds, the connection must be killed and the controller method must throw an exception. How can I achieve that?
Here's my Hikari config:
allowPoolSuspension.............false
autoCommit......................true
catalog.........................none
connectionInitSql...............none
connectionTestQuery............."SELECT COUNT(*) FROM Clinics"
connectionTimeout...............30000
dataSource......................none
dataSourceClassName.............none
dataSourceJNDI..................none
dataSourceProperties............{password=<masked>}
driverClassName................."com.filemaker.jdbc.Driver"
healthCheckProperties...........{}
healthCheckRegistry.............none
idleTimeout.....................600000
initializationFailFast..........true
initializationFailTimeout.......1
isolateInternalQueries..........false
jdbc4ConnectionTest.............false
jdbcUrl.........................jdbc:filemaker://***:2399/ec_data
leakDetectionThreshold..........90000
maxLifetime.....................1800000
maximumPoolSize.................10
metricRegistry..................none
metricsTrackerFactory...........none
minimumIdle.....................10
password........................<masked>
poolName........................"HikariPool-1"
readOnly........................false
registerMbeans..................false
scheduledExecutor...............none
scheduledExecutorService........internal
schema..........................none
threadFactory...................internal
transactionIsolation............default
username........................"CHC"
validationTimeout...............5000
HikariCP focuses purely on connection pool management, i.e. managing the connections it has created.
connectionTimeout - how long HikariCP will wait for a connection to be established (basically, a JDBC connection):
spring.datasource.hikari.connectionTimeout=30000
maxLifetime - how long a connection will live in the pool before being closed:
spring.datasource.hikari.maxLifetime=1800000
idleTimeout - how long an unused connection stays in the pool:
spring.datasource.hikari.idleTimeout=30000
Use javax.persistence.query.timeout to cancel the request if it takes longer than the defined timeout.
javax.persistence.query.timeout (Long - milliseconds)
The javax.persistence.query.timeout hint defines how long a query is allowed to run before it gets canceled. Hibernate doesn't handle this timeout itself but provides it to the JDBC driver via the JDBC Statement.setQueryTimeout method.
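A small sketch of how that hint can be supplied, assuming an injected EntityManager (the entity name and timeout value are illustrative; Hibernate forwards the hint to the driver as described above):

import javax.persistence.TypedQuery;
import java.util.List;

TypedQuery<Patient> query = entityManager
        .createQuery("SELECT p FROM Patient p", Patient.class);
query.setHint("javax.persistence.query.timeout", 30_000); // milliseconds
List<Patient> patients = query.getResultList();

With Spring Data JPA the same hint can be attached to a repository method via @QueryHints(@QueryHint(name = "javax.persistence.query.timeout", value = "30000")).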
The FileMaker JDBC driver ignores the javax.persistence.query.timeout parameter, even though the timeout value reaches the driver's implementation of the java.sql.Statement.setQueryTimeout setter. So I resolved the problem by extending the class com.filemaker.jdbc.Driver and overriding the connect method so that it adds the sockettimeout parameter to the connection properties. With this parameter in place, the FileMaker JDBC driver interrupts the connection if no data has arrived from the socket for the timeout period.
I've also filed an issue with FileMaker: https://community.filemaker.com/message/798471
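A rough sketch of that override (the class name and timeout value are illustrative; only the sockettimeout property name comes from the workaround above, and the FileMaker driver's internals may differ):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

public class SocketTimeoutFileMakerDriver extends com.filemaker.jdbc.Driver {

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        Properties withTimeout = new Properties();
        if (info != null) {
            withTimeout.putAll(info);
        }
        // sockettimeout value is an assumption; check the driver documentation for the expected unit,
        // and pick a value larger than your slowest legitimate query.
        withTimeout.setProperty("sockettimeout", "120000");
        return super.connect(url, withTimeout);
    }
}

The custom class is then registered as the JDBC driver class name in place of com.filemaker.jdbc.Driver.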

AMQP Channel shutdown but Consumer not always restart

I frequently get Channel shutdown: connection error issues (logged under the 24.133.241:5671 thread; the name is truncated) in the RabbitMQ Java client (my producer and consumer are far apart). Most of the time the consumer is automatically restarted, as I have enabled heartbeats (15 seconds). However, in some instances there was only Channel shutdown: connection error, with no Consumer raised exception and no Restarting Consumer (under the cTaskExecutor-4 thread).
My current workaround is to restart my application. Can anyone shed some light on this matter?
2017-03-20 12:42:38.856 ERROR 24245 --- [24.133.241:5671] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error
2017-03-20 12:42:39.642 WARN 24245 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
...
2017-03-20 12:42:39.642 INFO 24245 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{amq.ctag-4CqrRsUP8plDpLQdNcOjDw=21-05060179}], channel=Cached Rabbit Channel: AMQChannel(amqp://21-05060179@10.24.133.241:5671/,1), conn: Proxy@7ec31754 Shared Rabbit Connection: SimpleConnection@44bac9ec [delegate=amqp://21-05060179@10.24.133.241:5671/], acknowledgeMode=NONE local queue size=0
Generally, this is due to the consumer thread being "stuck" in user code somewhere, so it can't react to the broken connection.
If you have network issues, perhaps it's stuck reading or writing to a socket; make sure you have timeouts set for any I/O operations.
The next time it happens, take a thread dump to see what the consumer threads are doing.
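As an illustration of what "timeouts for any I/O operations" means, here is a hedged sketch (the downstream URL and values are hypothetical) of giving an HTTP call made from consumer code explicit timeouts so the listener thread cannot block indefinitely:

import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection connection =
        (HttpURLConnection) new URL("http://downstream-service/api").openConnection(); // hypothetical URL
connection.setConnectTimeout(5_000); // give up if the TCP connection is not established within 5 s
connection.setReadTimeout(15_000);   // give up if the server stops sending data for 15 s
int status = connection.getResponseCode();

The same idea applies to JDBC, raw sockets, or any other blocking call made from the consumer.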
