Weird behaviour of connectTimeout in MySQL Spring Boot - java

I want to implement a database connection timeout in my Spring Boot application, so that whenever establishing a connection to the database takes more than a certain time, it throws a timeout exception instead of waiting indefinitely. I am using MySQL as the database. In my JDBC URL I have set the parameter "connectTimeout" like this:
datasource.jdbc-url=jdbc:mysql://XX.XX.XX.XX/dbName?connectTimeout=5000
In the official docs, it's mentioned that the unit is milliseconds. I have measured the database connection time locally to be around 1400 ms:
stopwatch.start();
datasource.getConnection();
stopwatch.stop();
So setting connectTimeout to anything below 1400 should ideally throw a timeout exception, but it does not throw any error even with a value as low as 10. Only when I set the value to 1 does it throw a timeout exception:
datasource.jdbc-url=jdbc:mysql://XX.XX.XX.XX/dbName?connectTimeout=1
This gives me the notion that the timeout may actually be interpreted in seconds. Am I doing anything wrong here?
MySQL Connector/J version: 5.1.46
Docs Link: https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-connp-props-networking.html
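
For anyone reproducing this, here is a minimal probe (credentials are placeholders) that times DriverManager.getConnection() directly, keeping any connection pool out of the measurement:

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectTimeoutProbe {
    public static void main(String[] args) throws Exception {
        // connectTimeout is documented in milliseconds; 500 ms is well
        // below the ~1400 ms connection time measured above.
        String url = "jdbc:mysql://XX.XX.XX.XX/dbName?connectTimeout=500";
        long start = System.nanoTime();
        try (Connection conn = DriverManager.getConnection(url, "user", "pass")) {
            System.out.printf("connected in %d ms%n",
                    (System.nanoTime() - start) / 1_000_000);
        } catch (Exception e) {
            System.out.printf("failed after %d ms: %s%n",
                    (System.nanoTime() - start) / 1_000_000, e.getMessage());
        }
    }
}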

Related

R2DBC pool closes all connections before each query

We're using the latest versions of spring-data-r2dbc, r2dbc-pool, and r2dbc-postgresql for connecting to a PostgreSQL database through a connection pool. We noticed some high response times, much higher than the query response times reported by the database itself (query_store.qs_view).
We added a metricsRecorder to the pool and, for debugging purposes, we're only printing when each method is invoked. It seems that before each SQL query, we get as many recordDestroyLatency invocations as there are connections in the pool, and the same number of recordAllocationSuccessAndLatency invocations. We assumed this means that each connection gets closed and reopened before each query. We then compared with the database logs, which prove this is true: there is the same number of "could not receive data from client: An existing connection was forcibly closed by the remote host" messages, each followed by a "connection received:" message.
Why would this happen? Below is the code we're using for creating the connection factory.
import io.r2dbc.pool.ConnectionPool
import io.r2dbc.pool.ConnectionPoolConfiguration
import io.r2dbc.postgresql.PostgresqlConnectionConfiguration
import io.r2dbc.postgresql.PostgresqlConnectionFactory
import io.r2dbc.spi.ConnectionFactory
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.data.r2dbc.config.AbstractR2dbcConfiguration

@Configuration
open class DatabaseConfiguration : AbstractR2dbcConfiguration() {
    // some variable initialisations

    @Bean
    override fun connectionFactory(): ConnectionFactory {
        val cf = PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration.builder()
                .host(hostname)
                .database(dbName)
                .schema(dbSchema)
                .username(dbUsername)
                .password(dbPassword)
                .build()
        )
        val cp = ConnectionPoolConfiguration.builder(cf)
            .initialSize(poolInitialSize)
            .maxSize(poolMaxSize)
            .metricsRecorder(DatabaseMetricsRecorder())
            .build()
        return ConnectionPool(cp)
    }
}
As mentioned, the DatabaseMetricsRecorder just prints each operation. For the query itself, we're extending the ReactiveCrudRepository interface. The ConnectionPoolConfiguration is in its simplest form here; we tried adding parameters like maxIdleTime or validationQuery (as we'll have in production), but it doesn't seem to help.
It's a known bug in R2DBC pool; here's the issue. As a workaround, maxLifeTime should be set explicitly. For example, I set it to the maximum allowed value in milliseconds (a value greater than that makes R2DBC throw an exception):
val cp = ConnectionPoolConfiguration.builder(cf)
    .initialSize(poolInitialSize)
    .maxSize(poolMaxSize)
    .maxLifeTime(Duration.ofMillis(Long.MAX_VALUE))
    .metricsRecorder(DatabaseMetricsRecorder())
    .build()

UCP query timeout property on DataSource level

We are using the hibernate3 jar with JDK 6, and UCP 11.2.0.3 as the connection pool. We are now facing a connection-pool-full issue, and we have already set the abandoned-connection limit. We want to implement a query timeout with UCP. Can this be handled at the DataSource level? I can see the method datasource.setConnectionProperty(name, value), but I didn't find a property for query timeout.
The properties you may set for UCP are defined in the documentation.
You may set the Time-To-Live connection timeout, which caps the total time a connection may be borrowed from the pool:
pds.setTimeToLiveConnectionTimeout(18000)
The query timeout can be set at the statement level and is valid only for that statement (see here), so it is not configured via UCP:
stmt.setQueryTimeout(timeout)
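
If the timeout should apply to every statement, a small helper can centralize it; a minimal sketch (the helper and its parameter names are illustrative, not part of UCP):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class Statements {
    private Statements() {}

    // Prepares a statement with a query timeout applied. Standard JDBC:
    // the driver cancels the statement if execution exceeds the limit.
    public static PreparedStatement prepareWithTimeout(
            Connection conn, String sql, int timeoutSeconds) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(sql);
        stmt.setQueryTimeout(timeoutSeconds);
        return stmt;
    }
}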

Understanding JDBC Timeout Variables and replication

I am using the JDBC driver to connect to MySQL from my Java code (a read client).
Driver = com.mysql.jdbc.Driver
JdbcUrl = jdbc:mysql://<<IpOftheDb>>/<<DbSchema Name>>?autoReconnect=true&connectTimeout=5000&socketTimeout=10000
If the database is down (the machine hosting the DB is up, but the mysqld process is not running), it takes some time to get the exception. The exception is:
"com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up."
In the URL above, socketTimeout is 10 seconds. With that setting, if I bring the DB back up, I get the response correctly. But if I reduce it to one second and execute the query, I get the same exception:
"com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up."
Changing connectTimeout, however, doesn't change anything. Can someone explain what socketTimeout and connectTimeout mean?
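
For reference, the same timeouts can also be supplied as driver properties instead of URL parameters; a minimal sketch (credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimeoutProps {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbUser");          // placeholder credentials
        props.setProperty("password", "dbPassword");
        props.setProperty("connectTimeout", "5000");  // ms to establish the TCP socket
        props.setProperty("socketTimeout", "10000");  // ms a blocked socket read may wait
        props.setProperty("autoReconnect", "true");

        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://<<IpOftheDb>>/<<DbSchemaName>>", props);
        System.out.println("connected: " + !conn.isClosed());
        conn.close();
    }
}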
Also, if we set up replication and specify the second database as failover, my connection string changes to:
jdbc:mysql://<<PrimaryDbIP>>,<<SecondaryDbIp>>/<<DbSchema>>?useTimezone=true
&serverTimezone=UTC&useLegacyDatetimeCode=false
&failOverReadOnly=false&autoReconnect=true&maxReconnects=3
&initialTimeout=5000&connectTimeout=6000&socketTimeout=6000
&queriesBeforeRetryMaster=50&secondsBeforeRetryMaster=30
I see that if the primary is down, I get the response from the secondary (failover DB).
Now, when the client executes a query, does it go to the primary database, wait for socketTimeout (or whatever), and then go to the secondary, or does it go to the secondary before the timeout occurs?
Moreover, the second time the same connection object is used, does it go directly to the secondary, or is the above process repeated?
I tried to find documentation that explains this but couldn't. Hopefully someone can help here by explaining the various timeout parameters and their usefulness.

jdbc connectTimeout vs jdbc loginTimeout

There is a requirement in our project to support a 'JDBC timeout' feature for Postgres (PostgreSQL driver).
We also support Microsoft SQL Server (jTDS driver) and MySQL (MySQL driver), so I want to introduce 'loginTimeout' as a common feature for all the databases.
While going through the drivers' documentation, I found a JDBC parameter called 'loginTimeout' that is supported by both the jTDS and PostgreSQL drivers, but not by MySQL.
http://jtds.sourceforge.net/faq.html
loginTimeout (default - 0 for TCP/IP connections or 20 for named pipe connections): The amount of time to wait (in seconds) for a successful connection before timing out. If a TCP/IP connection is used to connect to the database and Java 1.4 or newer is being used, the loginTimeout parameter is used to set the initial connection timeout when initially opening a new socket. A value of zero (the default) causes the connection to wait indefinitely, e.g., until a connection is established or an error occurs. See also socketTimeout. If a named pipe connection is used (namedPipe is true) and loginTimeout is greater than zero, the value of loginTimeout is used for the length of the retry period when "All pipe instances are busy" error messages are received while attempting to connect to the server. If loginTimeout is zero (the default), a value of 20 seconds is used for the named pipe retry period.
http://jdbc.postgresql.org/documentation/84/connect.html
loginTimeout = int: Specify how long to wait for establishment of a database connection. The timeout is specified in seconds.
But for MySQL there is nothing like loginTimeout; it only has:
connectTimeout: Timeout for socket connect (in milliseconds), with 0 being no timeout. Only works on JDK 1.4 or newer. Defaults to '0'.
So my question is: what is the difference between connectTimeout and loginTimeout? Do they provide the same functionality?
MySQL's connectTimeout setting determines how long the client will attempt to open a network connection; it makes no claims that the database itself will authenticate or function, only that a socket can be established.
Postgres, by contrast, is indicating a maximum amount of time to connect to the database; if your command returns before the timeout, then the database received the network request and responded in an appropriate amount of time.
So in one situation, you are only putting constraints on network behavior (i.e. how long you wait to connect a socket to that server at that port); in the other, you are putting constraints on database behavior (i.e. how long to wait for the database itself to connect).
As a side note, the interface javax.sql.CommonDataSource dictates a JDBC property, getLoginTimeout, which mimics the behavior of PostgreSQL's property. When you look at various implementations of a MysqlDataSource (MariaDB, for example), these methods are not implemented; this generally leads me to believe that there is no direct analogue in MySQL.
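
For completeness, the portable knob that getLoginTimeout pairs with can also be set globally through DriverManager; a minimal sketch (URL and credentials are placeholders), noting that whether a given driver honors it is implementation-specific:

import java.sql.Connection;
import java.sql.DriverManager;

public class LoginTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // Global login timeout in SECONDS; drivers that support it abort
        // getConnection() once the limit is exceeded, others ignore it.
        DriverManager.setLoginTimeout(10);
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost/dbName", "user", "pass");
        conn.close();
    }
}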

java.sql.SQLException: Closed Connection

I am getting the following error:
java.sql.SQLException: Closed Connection
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:208)
    at oracle.jdbc.driver.PhysicalConnection.getMetaData(PhysicalConnection.java:1508)
    at com.ibatis.sqlmap.engine.execution.SqlExecutor.moveToNextResultsSafely(SqlExecutor.java:348)
    at com.ibatis.sqlmap.engine.execution.SqlExecutor.handleMultipleResults(SqlExecutor.java:320)
    at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQueryProcedure(SqlExecutor.java:277)
    at com.ibatis.sqlmap.engine.mapping.statement.ProcedureStatement.sqlExecuteQuery(ProcedureStatement.java:34)
    at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithCallback(GeneralStatement.java:173)
    at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryForList(GeneralStatement.java:123)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:614)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:588)
    at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
    at org.springframework.orm.ibatis.SqlMapClientTemplate$3.doInSqlMapClient(SqlMapClientTemplate.java:268)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:193)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:219)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:266)
    at gov.hud.pih.eiv.web.authentication.AuthenticationUserDAO.isPihUserDAO(AuthenticationUserDAO.java:24)
    at gov.hud.pih.eiv.web.authorization.AuthorizationProxy.isAuthorized(AuthorizationProxy.java:125)
    at gov.hud.pih.eiv.web.authorization.AuthorizationFilter.doFilter(AuthorizationFilter.java:224)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246)
    ...
I am really stumped and can't figure out what could be causing this error. I am not able to reproduce the error on my machine, but in production it occurs frequently. I am using iBatis throughout the application, so there is no chance of my code not closing connections.
We do have stored procedures that run for a long time before they return results (around 15 seconds).
Does anyone have any ideas on what could be causing this? I don't think raising the number of connections on the application server will fix this issue, because if connections were running out we'd see "Error on allocating connections".
Sample code snippet:
this.setSqlMapClientTemplate(getSqlTempl());
getSqlMapClientTemplate().queryForList("authentication.isUserDAO", parmMap);
this.setSqlMapClientTemplate(getSqlTemplDW());
List results = (List) parmMap.get("Result0");
I am using validate in my connection pool.
Based on the stack trace, the likely cause is that you are continuing to use a ResultSet after close() was called on the Connection that generated the ResultSet.
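
For illustration, a minimal sketch of that failure mode (hypothetical code, not from the application above):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class ClosedConnectionDemo {
    // Reading a ResultSet after its Connection is closed fails on Oracle
    // with "java.sql.SQLException: Closed Connection".
    static void demo(DataSource dataSource) throws Exception {
        Connection conn = dataSource.getConnection();
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual");
        conn.close();  // also invalidates stmt and rs
        rs.next();     // throws: the underlying connection is gone
    }
}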
What is your DataSource framework? Apache Commons DBCP?
Do you use the poolPreparedStatements property in the data source configuration?
Check the following:
Make sure testOnBorrow and testOnReturn are true, and set a simple validationQuery like select 0 from dual (a configuration sketch follows after this answer).
Do you use autoCommit? Are you using START TRANSACTION and COMMIT in your stored procedures? After several days of debugging, we found out that you can't mix transaction management in both Java and SQL; you have to decide on one place to do it. Where are you doing yours?
Edit your question with answers to this, and we'll continue from there.
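
A minimal sketch of that validation setup, assuming Apache Commons DBCP as the pool (URL and credentials are placeholders):

import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {
    // Validation on borrow/return evicts connections killed by a DB
    // restart or network drop instead of handing them to the app.
    static BasicDataSource validatedPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder URL
        ds.setUsername("app");
        ds.setPassword("secret");
        ds.setValidationQuery("select 0 from dual");
        ds.setTestOnBorrow(true);
        ds.setTestOnReturn(true);
        return ds;
    }
}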
When a DB server reboots, or there are network problems, all the connections in the connection pool are broken, and this usually requires a restart of the application server.
If a broken connection is detected, you should create a new one to replace it in the connection pool. This is a common problem with dead connections.
