We're using the latest versions of spring-data-r2dbc, r2dbc-pool and r2dbc-postgresql to connect to a PostgreSQL database through a connection pool. We noticed some high response times, much higher than the query response times reported by the database itself (query_store.qs_view).
We added a metricsRecorder to the pool and, for debugging purposes, we're only printing when each method is invoked. It seems that before each SQL query we get as many recordDestroyLatency invocations as there are connections in the pool, and the same number of recordAllocationSuccessAndLatency invocations. We assumed this means that each connection gets closed and reopened before each query. We then compared with the database logs, which confirm it: there is the same number of "could not receive data from client: An existing connection was forcibly closed by the remote host" messages followed by "connection received:" messages.
Why would this happen? Below is the code we're using for creating the connection factory.
@Configuration
open class DatabaseConfiguration : AbstractR2dbcConfiguration() {

    // some variable initialisations

    @Bean
    override fun connectionFactory(): ConnectionFactory {
        val cf = PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration.builder()
                .host(hostname)
                .database(dbName)
                .schema(dbSchema)
                .username(dbUsername)
                .password(dbPassword)
                .build()
        )
        val cp = ConnectionPoolConfiguration.builder(cf)
            .initialSize(poolInitialSize)
            .maxSize(poolMaxSize)
            .metricsRecorder(DatabaseMetricsRecorder())
            .build()
        return ConnectionPool(cp)
    }
}
As mentioned, the DatabaseMetricsRecorder just prints each operation. For the query itself, we're extending the ReactiveCrudRepository interface. The ConnectionPoolConfiguration is in its simplest form here; we tried adding parameters such as maxIdleTime or validationQuery (as we will have in production), but that doesn't seem to help.
It's a known bug in r2dbc-pool; here's the issue. As a workaround, maxLifeTime should be set explicitly. For example, I set it to the maximum allowed value in milliseconds (if set to anything greater than that, R2DBC throws an exception):
val cp = ConnectionPoolConfiguration.builder(cf)
    .initialSize(poolInitialSize)
    .maxSize(poolMaxSize)
    .maxLifeTime(Duration.ofMillis(Long.MAX_VALUE))
    .metricsRecorder(DatabaseMetricsRecorder())
    .build()
The Vert.x documentation (https://vertx.io/docs/vertx-jdbc-client/java/) outlines this as the normal way to connect to a database:
String databaseFile = "sqlite.db";
JDBCPool pool = JDBCPool.pool(
    this.context.getVertx(),
    new JDBCConnectOptions()
        .setJdbcUrl("jdbc:sqlite:".concat(databaseFile)),
    new PoolOptions()
        .setMaxSize(1)
        .setConnectionTimeout(CONNECTION_TIMEOUT)
);
This application I am writing has interprocess communication, so I want to use WAL mode, and synchronous=NORMAL to avoid heavy disk usage. The WAL pragma (PRAGMA journal_mode=WAL) is set on the database file itself, so I don't need to worry about it on application startup. However, the synchronous pragma is set per connection, so I need to set it when the application starts. Currently that looks like this:
// await this future
pool
    .preparedQuery("PRAGMA synchronous=NORMAL")
    .execute();
I can confirm that, later on, the synchronous pragma is set on the database connection:
pool
    .preparedQuery("PRAGMA synchronous")
    .execute()
    .map(rows -> {
        for (Row row : rows) {
            System.out.println("pragma synchronous is " + row.getInteger("synchronous"));
        }
        return rows;
    });
and since I enforce a single connection in the pool, this should be fine. However, I can't help but feel that there is a better way of doing this.
As a side note, I chose a single connection because SQLite is synchronous in nature: there is only ever one write happening at a time to the database. Creating write contention within a single application sounds detrimental rather than helpful, and I have designed my application to have as few concurrent writes within a single process as possible, though inter-process concurrency is real.
So these aren't definitive answers, but I have tried a few other options and want to outline them here.
For instance, Vert.x can instantiate a SQLClient without a pool:
JsonObject config = new JsonObject()
    .put("url", "jdbc:sqlite:" + databaseFile)
    .put("driver_class", "org.sqlite.jdbcDriver")
    .put("max_pool_size", 1);

Vertx vertx = Vertx.vertx();
SQLClient client = JDBCClient.create(vertx, config);
though this still uses a connection pool, so I have to make the same adjustment of limiting the pool to a single connection so that the pragma sticks.
There is also a SQLiteConfig class from the SQLite JDBC library, but I am not sure how to connect it to the Vert.x JDBC wrappers:
org.sqlite.SQLiteConfig config = new org.sqlite.SQLiteConfig();
config.setSynchronous(org.sqlite.SQLiteConfig.SynchronousMode.NORMAL);
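The closest I can think of (a sketch, untested) is to wrap that SQLiteConfig in an org.sqlite.SQLiteDataSource and hand the DataSource to the Vert.x wrapper, assuming JDBCClient.create(Vertx, DataSource) is available in this version of vertx-jdbc-client. The config's pragmas would then apply to every connection the pool opens:

// build a DataSource that carries the pragmas from the SQLiteConfig above
org.sqlite.SQLiteDataSource dataSource = new org.sqlite.SQLiteDataSource(config);
dataSource.setUrl("jdbc:sqlite:" + databaseFile); // same file as above

// hand the pre-configured DataSource to the Vert.x JDBC client
SQLClient sqliteClient = JDBCClient.create(vertx, dataSource);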
Is a pool required with Vert.x? I did try running the SQLite JDBC driver directly, without a Vert.x wrapper, but this ran into all kinds of SQLITE_BUSY exceptions.
There is a requirement in our project to support a 'JDBC timeout' feature for Postgres (PostgreSQL driver).
We also support Microsoft SQL Server (jTDS driver) and MySQL (MySQL driver), so I want to introduce 'loginTimeout' as a common feature for all the databases.
While going through the drivers' documentation, I found that a JDBC parameter called 'loginTimeout' is supported by both the jTDS and PostgreSQL drivers, but not by MySQL.
http://jtds.sourceforge.net/faq.html
loginTimeout (default - 0 for TCP/IP connections or 20 for named pipe connections): The amount of time to wait (in seconds) for a successful connection before timing out. If a TCP/IP connection is used to connect to the database and Java 1.4 or newer is being used, the loginTimeout parameter is used to set the initial connection timeout when initially opening a new socket. A value of zero (the default) causes the connection to wait indefinitely, e.g., until a connection is established or an error occurs. See also socketTimeout. If a named pipe connection is used (namedPipe is true) and loginTimeout is greater than zero, the value of loginTimeout is used for the length of the retry period when "All pipe instances are busy" error messages are received while attempting to connect to the server. If loginTimeout is zero (the default), a value of 20 seconds is used for the named pipe retry period.
http://jdbc.postgresql.org/documentation/84/connect.html
loginTimeout = int: Specify how long to wait for establishment of a database connection. The timeout is specified in seconds.
But for MySQL there is nothing like loginTimeout; it only has:
connectTimeout: Timeout for socket connect (in milliseconds), with 0 being no timeout. Only works on JDK-1.4 or newer. Defaults to '0'.
So my question is: what is the difference between connectTimeout and loginTimeout? Do they provide the same functionality?
MySQL's connectTimeout setting determines how long the client will attempt to open a network connection; it makes no claim that the database itself will authenticate or function, only that a socket can be established.
PostgreSQL's loginTimeout, by contrast, indicates a maximum amount of time to connect to the database; if your call returns before the timeout, then the database received the network request and responded in an appropriate amount of time.
So in one situation, you are only putting constraints on network behavior (i.e. how long you wait to connect a socket to that server at that port); in the other, you are putting constraints on database behavior (i.e. how long to wait for the database itself to connect).
As a side note, the interface javax.sql.CommonDataSource dictates a getLoginTimeout property, which mimics the behavior of PostgreSQL's parameter. When you look at various implementations of MysqlDataSource (MariaDB, for example), these methods are not implemented, which generally leads me to believe that there is no direct analogue in MySQL.
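If you still want a single setting across all three drivers, a rough sketch (the helper class and its name are made up; the property names and units are the ones quoted above) could translate a timeout in seconds into the driver-specific connection property:

import java.util.Properties;

public final class ConnectTimeoutProperties {

    // Builds driver-specific connection properties for a timeout given in seconds.
    public static Properties forDriver(String driver, int timeoutSeconds) {
        Properties props = new Properties();
        switch (driver) {
            case "postgresql": // PostgreSQL: loginTimeout is in seconds
            case "jtds":       // jTDS: loginTimeout is in seconds
                props.setProperty("loginTimeout", String.valueOf(timeoutSeconds));
                break;
            case "mysql":      // MySQL: connectTimeout is in milliseconds
                props.setProperty("connectTimeout", String.valueOf(timeoutSeconds * 1000L));
                break;
            default:
                throw new IllegalArgumentException("Unknown driver: " + driver);
        }
        return props;
    }
}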
I run a Tomcat application that uses a JNDI connection pool.
After some time the connection pool stops handing out connections and the application hangs.
It seems this happens because some code obtains a connection and doesn't return it to the pool.
How can I monitor which code does this?
More generally, I want to see what all the connections are doing at any given moment.
I cannot change the application, but I can adjust Tomcat and maybe add some interceptors.
Most connection pool implementations can be configured to detect connections that are not returned to the pool. For example, Tomcat's JDBC connection pool has various configuration options for "abandoned connections" (connections for which the lease expired). If you search for "Abandoned" on its web page, you'll find the options:
removeAbandoned
removeAbandonedTimeout
logAbandoned
suspectTimeout
As mentioned on the web-page, these settings will add a little overhead but at least your application will not hang. When testing your application, set a low value for removeAbandonedTimeout and a low value for maxActive so that you can catch unreturned connections early.
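As a rough illustration (not your exact setup; the URL and credentials are placeholders), this is roughly how those options look when configuring Tomcat's pool programmatically; in a JNDI setup the same attributes go on the <Resource> element:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class AbandonedAwarePool {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        p.setUsername("user");                             // placeholder
        p.setPassword("secret");                           // placeholder
        p.setMaxActive(10);              // keep this low while testing
        p.setRemoveAbandoned(true);      // reclaim connections that are never returned
        p.setRemoveAbandonedTimeout(30); // seconds a connection may be held before reclaiming
        p.setLogAbandoned(true);         // log the stack trace of the code that leaked it
        p.setSuspectTimeout(15);         // only log long-held connections, without killing them
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}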
I never use the connection pool API itself; I always wrap it in a helper.
That way, I can do this in the helper:
private Exception created = (0 == 1) ? new Exception() : null;
When I run into problems like yours, I just change one character (0 -> 1) to have a stack trace of who created this instance in my debugger.
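A minimal sketch of what such a helper could look like (the class and its names are made up for illustration):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TracedConnection implements AutoCloseable {

    // Flip (0 == 1) to (1 == 1) to capture the stack trace of whoever created this wrapper.
    private final Exception created = (0 == 1) ? new Exception("created here") : null;

    private final Connection delegate;

    public TracedConnection(DataSource dataSource) throws SQLException {
        this.delegate = dataSource.getConnection();
    }

    public Connection unwrap() {
        return delegate;
    }

    @Override
    public void close() throws SQLException {
        delegate.close(); // returns the connection to the pool
    }
}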
The database connection is obtained as below:
public Connection getDBConnection() throws NamingException, SQLException {
    Context context = new InitialContext();
    DataSource dataSource = (javax.sql.DataSource) context.lookup("java:myDataSource");
    return dataSource.getConnection();
}
For a userA, should each database request call getDBConnection() once, with no need to ensure that all requests use the same connection?
That is, if userA makes three database requests, should userA call getDBConnection() three times and call Connection.close() after use in each request?
If userA calls getDBConnection() three times (that is, calls dataSource.getConnection() three times), are three connections created? Or is that unknown and controlled by WebLogic?
I am quite confused. Is it true that there should be one new connection per database request? Or should I just call DataSource.getConnection() for each database request and let the web server control how many new connections are actually created, without having to think about it?
Every time you call DataSource.getConnection, the data source will retrieve a connection for you. It should be true that the returned connection is not being actively used by anyone else, but it is not necessarily a brand-new connection.
For example, if you use a connection pool, which is a very common practice, then when you call Connection.close, the connection is not actually closed but instead returns to a pool of available connections. Then, when you call DataSource.getConnection, the connection pool will see if it has any spare connections lying around that it hasn't already handed out. If so, it will typically test that they haven't gone stale (usually by executing a very quick query against a dummy table) and, if a connection is still good, return that existing connection to the caller. But if the connection is stale, the connection pool will retrieve a truly new connection from the underlying database driver and return that instead.
Typically, connection pools have a maximum number of real connections that they will keep at any one time (say, 50). If your application tries to request more than 50 simultaneous connections, DataSource.getConnection will throw an exception. Or in some implementations, it will block for a while until one becomes available, and then throw an exception after that time expires. For a sample implementation, have a look at Apache Commons DBCP.
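To make the pattern concrete, here is a minimal sketch (the JNDI name is the one from your code; the table and query are placeholders) of obtaining a connection per request and closing it when done, letting the pool decide whether it is new or reused:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class UserDao {
    public int countUsers() throws Exception {
        DataSource dataSource =
                (DataSource) new InitialContext().lookup("java:myDataSource");
        // try-with-resources closes the connection, i.e. hands it back to the pool
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM users");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}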
Hopefully that answers your question!
Does anybody know a way to re-establish/retry a Hibernate connection? For example: the remote DB is down and I start my application. Hibernate is not able to establish a connection and fails, but the application is not closed. Is there a way to tell Hibernate to try once more to establish a connection?
Thanks in advance
You should really go for C3P0 connection pooling: http://www.mchange.com/projects/c3p0/index.html#hibernate-specific
There is a section in C3P0 documentation on that subject: http://www.mchange.com/projects/c3p0/index.html#configuring_recovery
First you have to configure c3p0 properly, which, when using Hibernate, must happen in the c3p0.properties file.
In your c3p0.properties, put these properties to retry reconnecting indefinitely, every 3 seconds, while the database is down:
c3p0.acquireRetryAttempts = 0
c3p0.acquireRetryDelay = 3000
c3p0.breakAfterAcquireFailure = false
Also, to avoid broken connections lying in your pool indefinitely, use connection age management:
c3p0.maxConnectionAge = 6000
c3p0.maxIdleTime = 6000
c3p0.maxIdleTimeExcessConnections = 1800
c3p0.idleConnectionTestPeriod = 3600
These may be quite expensive, but helpful if the above is not enough:
c3p0.testConnectionOnCheckout = true
c3p0.preferredTestQuery = SELECT 1;
You may also want to check for connection leaks which prevent recovery:
c3p0.debugUnreturnedConnectionStackTraces = true
And finally, make sure that C3P0 is hooked into Hibernate correctly: enable debug logging for the "com.mchange" package and see if C3P0 tells you anything about itself. It should state which configuration properties are loaded, so check that it's all there.
I hope this helps.
C3P0 is a connection-pool implementation that Hibernate supports out of the box.
Add hibernate.connection.provider_class = org.hibernate.connection.C3P0ConnectionProvider to the Hibernate properties file. Create a c3p0.properties file, setting the parameters accordingly. This file and the c3p0-x.jar must be on the classpath.
c3p0.properties
c3p0.idleConnectionTestPeriod: If this is a number greater than 0, c3p0 will test all idle, pooled but unchecked-out connections every this number of seconds.
c3p0.testConnectionOnCheckout: Use only if necessary. Expensive. If true, an operation will be performed at every connection checkout to verify that the connection is valid. Better choice: verify connections periodically using idleConnectionTestPeriod.
There are several other properties that can be configured in hibernate.properties & c3p0.properties.
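If you prefer to wire this up in code rather than in hibernate.properties, a rough equivalent might look like the sketch below (assuming your Hibernate version forwards the hibernate.c3p0.* keys to C3P0; the values are examples, not recommendations):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateC3p0Bootstrap {
    public static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration()
                // tell Hibernate to obtain connections through C3P0
                .setProperty("hibernate.connection.provider_class",
                        "org.hibernate.connection.C3P0ConnectionProvider")
                // hibernate.c3p0.* keys are forwarded to the C3P0 pool
                .setProperty("hibernate.c3p0.idle_test_period", "3600")
                .setProperty("hibernate.c3p0.timeout", "6000");
        // plus your usual connection/dialect settings and mappings
        return cfg.buildSessionFactory();
    }
}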
Maybe you could try calling .getCurrentSession() instead of .openSession()?
If the connection fails, you must establish a new one.
I hope this helps.