Connection pool size with postgres r2dbc-pool - java

I'm not able to open more than 10 connections with spring-webflux and r2dbc (with r2dbc-pool driver 0.8.0.M8). My config looks like:
@Configuration
public class PostgresConfig extends AbstractR2dbcConfiguration {

    @Override
    @Bean
    public ConnectionFactory connectionFactory() {
        ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
                .option(DRIVER, "pool")
                .option(PROTOCOL, "postgresql")
                .option(HOST, host)
                .option(USER, user)
                .option(PASSWORD, password)
                .option(DATABASE, database)
                .build());
        ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(connectionFactory)
                .maxIdleTime(Duration.ofMinutes(30))
                .initialSize(initialSize)
                .maxSize(maxSize)
                .maxCreateConnectionTime(Duration.ofSeconds(1))
                .build();
        return new ConnectionPool(configuration);
    }
}
When I specify more than 10 connections I get errors like:
org.springframework.dao.DataAccessResourceFailureException:
Failed to obtain R2DBC Connection; nested exception is
java.util.concurrent.TimeoutException:
Did not observe any item or terminal signal within 1000ms in 'lift'
(and no fallback has been configured)
at org.springframework.data.r2dbc.connectionfactory.ConnectionFactoryUtils
.lambda$getConnection$0(ConnectionFactoryUtils.java:71)
Moreover, the number of connections stays at the initial size; new connections are not created.

Spring Boot (at least 2.3.4) has a tricky "gotcha" regarding the pool size when it is set via properties/yaml. If you include "pool" in your database url, then the sizes you set (initial size or max size) have no effect and the r2dbc-pool defaults of 10 and 10 are used.
This is due to PooledConnectionFactoryCondition in ConnectionFactoryConfigurations.java not matching when both spring.r2dbc.pool.enabled=true (which it is if the r2dbc-pool dependency is on the classpath) and "pool" is part of the spring.r2dbc.url property.
From the PooledConnectionFactoryCondition docs:
Condition that checks that a ConnectionPool is requested. The
condition matches if pooling was opt-in via configuration and the r2dbc url does not contain pooling-related options.
This in turn leads to the ConnectionPool bean not being created.
Omit the "pool" keyword from the r2dbc url property and keep the r2dbc-pool dependency on the classpath; you will then get a correctly configured pool.

OK, MAX_SIZE should also be specified in the ConnectionFactoryOptions; otherwise the connection pool size still remains 10.
import static io.r2dbc.pool.PoolingConnectionFactoryProvider.MAX_SIZE;

ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
        .option(DRIVER, "pool")
        .option(PROTOCOL, "postgresql")
        .option(HOST, host)
        .option(USER, user)
        .option(PASSWORD, password)
        .option(DATABASE, database)
        .option(MAX_SIZE, maxSize)
        .build());
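If the initial size also needs to differ from the default, PoolingConnectionFactoryProvider exposes an INITIAL_SIZE option as well, which can be passed the same way (a sketch; the variable values are placeholders):
import static io.r2dbc.pool.PoolingConnectionFactoryProvider.INITIAL_SIZE;
import static io.r2dbc.pool.PoolingConnectionFactoryProvider.MAX_SIZE;

ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
        .option(DRIVER, "pool")
        .option(PROTOCOL, "postgresql")
        .option(HOST, host)
        .option(USER, user)
        .option(PASSWORD, password)
        .option(DATABASE, database)
        .option(INITIAL_SIZE, initialSize) // e.g. 5
        .option(MAX_SIZE, maxSize)         // e.g. 20
        .build());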

Note that you can use the release version 0.8.4.RELEASE (the latest at the time of writing): https://mvnrepository.com/artifact/io.r2dbc/r2dbc-postgresql/0.8.4.RELEASE. It does not require you to instantiate a ConnectionFactory yourself.

If you are using spring-boot-starter-data-r2dbc, the initial and max pool sizes are configurable in application.properties:
spring.r2dbc.pool.initialSize=2
spring.r2dbc.pool.maxSize=2
See the org.springframework.boot.autoconfigure.r2dbc.R2dbcProperties class.

Below is my configuration for spring-boot-starter-data-r2dbc; check if it helps you:
spring:
  r2dbc:
    url: r2dbc:postgresql://127.0.0.1:5432/test?schema=public
    username: postgres
    password: postgres
    pool:
      name: TEST-POOL
      initial-size: 1
      max-size: 10
      max-idle-time: 30m

Related

How to prevent CommunicationsException?

I am currently working on an app that uses two different DBs (different instances).
DB A is entirely under project A; DB B, however, belongs to the other project. (I am managing these via gcloud App Engine.)
What is my problem:
DB B always gets disconnected if there is no request for more than a few hours, with the error message below.
{"timestamp":1555464776769,"status":500,"error":"Internal Server Error","exception":"org.springframework.transaction.CannotCreateTransactionException","message":"Could not open JPA EntityManager for transaction; nested exception is javax.persistence.PersistenceException: com.mysql.cj.jdbc.exceptions.CommunicationsException: The last packet successfully received from the server was 43,738,243 milliseconds ago. The last packet sent successfully to the server was 43,738,243 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.","path":"/client/getAllCompany"}
To resolve this issue, I tried:
1) adding 'autoReconnect=true' in application.properties:
api.datasource.url = jdbc:mysql://google/projectB?cloudSqlInstance=projectB:australia-southeast1:projectB&socketFactory=com.google.cloud.sql.mysql.SocketFactory&useSSL=false&autoReconnect=true
2) adding the config below in application.properties:
spring.datasource.tomcat.test-while-idle=true
spring.datasource.tomcat.time-between-eviction-runs-millis=3600000
spring.datasource.tomcat.min-evictable-idle-time-millis=7200000
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
(My project doesn't have a web.xml file.)
If I re-deploy the project, I can access data from DB B again.
How can I configure things so the connection to DB B is not killed?
Any advice is welcome. Thank you in advance.
HibernateConfig Code for DB B
@Bean(name = "apiDataSource")
@ConfigurationProperties(prefix = "api.datasource")
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}

@Bean(name = "apiEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean apiEntityManagerFactory(
        EntityManagerFactoryBuilder builder, @Qualifier("apiDataSource") DataSource dataSource
) {
    return builder.dataSource(dataSource).packages("com.workspez.api.entity").persistenceUnit("api").build();
}

@Bean(name = "apiTransactionManager")
public PlatformTransactionManager apiTransactionManager(
        @Qualifier("apiEntityManagerFactory") EntityManagerFactory apiEntityManagerFactory
) {
    return new JpaTransactionManager(apiEntityManagerFactory);
}

@Bean(name = "apiJdbc")
public NamedParameterJdbcTemplate apiJdbcTemplate() {
    return new NamedParameterJdbcTemplate(dataSource());
}
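A note on attempt 2): the spring.datasource.tomcat.* properties only bind to the auto-configured primary DataSource, not to a bean bound with @ConfigurationProperties(prefix = "api.datasource"). Assuming the pool created by DataSourceBuilder here is the Tomcat JDBC pool (if it is HikariCP, the equivalent Hikari settings would be needed instead), a sketch of validation settings that bind to that second pool directly; the values are placeholders:
api.datasource.test-on-borrow=true
api.datasource.test-while-idle=true
api.datasource.time-between-eviction-runs-millis=60000
api.datasource.validation-query=SELECT 1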

How to restrict initial pool size in hikaricp?

I used to have a Tomcat connection pool configuration restricting the initial pool size: spring.datasource.tomcat.initial-size=2
Now, switching to HikariCP: what is the equivalent to restrict the number of initially started connections?
Sidenote: spring.datasource.hikari.minimumIdle does not prevent initializing 10 connections at startup.
You can use these properties provided by Spring Boot:
spring.datasource.hikari.minimumIdle=5
spring.datasource.hikari.maximumPoolSize=8
and then:
spring.datasource.hikari.idleTimeout=120000
to limit the lifetime of idle connections, but Hikari doesn't provide such a property for the initial number of connections.
With Spring Boot, set these properties in your application.properties:
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.maximum-pool-size=10
I just found out it had to do with my configuration of multiple data sources.
In general, the property spring.datasource.hikari.minimum-idle=2 does restrict the startup pool size correctly!
But with multiple data sources, a configuration prefix was missing, as follows:
@Bean
@ConfigurationProperties("spring.datasource.secondary.hikari")
public DataSource secondaryDataSource() {
    return DataSourceBuilder.create().build();
}
Before, I just had "spring.datasource.secondary", and thereby my "spring.datasource.secondary.hikari.*" properties were not taken into account.
This is arguably documented in a confusing way in
https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html
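For reference, the pattern from the linked howto looks roughly like the sketch below; the bean names, the "secondary" prefix and the explicit HikariDataSource type are assumptions, not a verbatim copy of the docs.
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SecondaryDataSourceConfiguration {

    @Bean
    @ConfigurationProperties("spring.datasource.secondary")
    public DataSourceProperties secondaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.secondary.hikari")
    public HikariDataSource secondaryDataSource() {
        // Properties such as spring.datasource.secondary.hikari.minimum-idle
        // and ...maximum-pool-size bind directly onto this HikariDataSource.
        return secondaryDataSourceProperties()
                .initializeDataSourceBuilder()
                .type(HikariDataSource.class)
                .build();
    }
}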

Cannot undeploy from Tomcat due to specific Spring JMS configuration

I have used ActiveMQ as the JMS implementation (activemq-spring 5.12.1) with Spring JMS integration (spring-jms 4.2.3.RELEASE), all wrapped in a Spring Boot web application deployed on Tomcat.
I have the following Spring configuration (reduced for brevity):
@Configuration
@EnableJms
public class AppConfiguration {

    @Bean
    public XAConnectionFactory jmsXaConnection(String activeMqUsername, String activeMqPassword) {
        ActiveMQXAConnectionFactory activeMQXAConnectionFactory = new ActiveMQXAConnectionFactory(activeMqUsername, activeMqPassword, activeMqUrl);
        ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
        prefetchPolicy.setAll(0);
        activeMQXAConnectionFactory.setPrefetchPolicy(prefetchPolicy);
        return activeMQXAConnectionFactory;
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory, JtaTransactionManager jtaTransactionManager) {
        DefaultJmsListenerContainerFactory containerFactory = new DefaultJmsListenerContainerFactory();
        containerFactory.setConnectionFactory(connectionFactory);
        containerFactory.setTransactionManager(jtaTransactionManager);
        containerFactory.setSessionTransacted(true);
        containerFactory.setTaskExecutor(Executors.newFixedThreadPool(2));
        containerFactory.setConcurrency("2-2");
        containerFactory.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
        return containerFactory;
    }
}
My goal was to configure two consumers (hence concurrency set to 2-2) and to prevent any message caching (hence the prefetch policy set to 0).
It works, but causes a very unpleasant side effect:
When I try to undeploy the application via Tomcat Manager, it hangs for a while and then indefinitely produces the following DEBUG message every second:
"DefaultMessageListenerContainer:563 - Still waiting for shutdown of 2 Message listener invokers".
Therefore, I am forced to kill the Tomcat process every time. What have I done wrong?
One of my lucky shots (the documentation of both ActiveMQ and Spring JMS was not that helpful) was to set the prefetch policy to 1 instead of 0. Then it undeploys gracefully, but I cannot see how that relates.
I am also curious why the cache level set to CACHE_CONSUMER is required for ActiveMQ to create two consumers. When the default setting was left (CACHE_NONE while using an external transaction manager), only one consumer was created (while concurrency was still set to 2-2, and so was the TaskExecutor).
If it matters, Atomikos is used for the connection factory and transaction manager. I can paste its configuration too, but it seems irrelevant.
Most likely this means the consumer threads are "stuck" in user code; take a thread dump with jstack to see what the container threads are doing.
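If the dump shows the two invoker threads blocked in receive or in listener code, one further detail worth reviewing in the configuration above (an observation, not a confirmed cause of this particular hang) is that Executors.newFixedThreadPool(2) creates an executor Spring does not manage, so its threads are not stopped when the context shuts down. A sketch of a container-managed alternative, to be added to the AppConfiguration class; the bean name and timeouts are assumptions:
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Bean
public ThreadPoolTaskExecutor jmsListenerTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(2);                        // matches the 2-2 concurrency
    executor.setMaxPoolSize(2);
    executor.setWaitForTasksToCompleteOnShutdown(true); // let in-flight messages finish
    executor.setAwaitTerminationSeconds(30);            // but do not wait forever
    return executor;
}
Passing this executor to containerFactory.setTaskExecutor(...) means the listener threads are shut down together with the application context.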

How to disable connection pooling in Hibernate

I have a web application that currently uses c3p0 and Hibernate to connect to a Firebird 1.5 database.
I am facing a problem from time to time where the database just stops responding; even trying to manually restart the service has no effect, and no logs are generated, so I have to reboot the machine to get it working again.
I think that maybe Firebird hangs when the pool tries to acquire a certain number of connections, or something like that. So I need to test my app without connection pooling, to check whether this is the problem.
I can't simply remove the c3p0 configs from the persistence configuration, because then Hibernate would use its own integrated connection pool. So how do I do it?
The most flexible solution is to use an explicit DataSource, instead of configuring the connection pooling through Hibernate. One option to configure a non-pooling DataSource is by using DriverManagerDataSource:
@Override
protected Properties getProperties() {
    Properties properties = new Properties();
    properties.put("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
    // log settings
    properties.put("hibernate.hbm2ddl.auto", "update");
    // data source settings
    properties.put("hibernate.connection.datasource", newDataSource());
    return properties;
}

// ProxyDataSource and SLF4JQueryLoggingListener come from the datasource-proxy library
protected ProxyDataSource newDataSource() {
    DriverManagerDataSource actualDataSource = new DriverManagerDataSource();
    actualDataSource.setUrl("jdbc:hsqldb:mem:test");
    actualDataSource.setUsername("sa");
    actualDataSource.setPassword("");
    ProxyDataSource proxyDataSource = new ProxyDataSource();
    proxyDataSource.setDataSource(actualDataSource);
    proxyDataSource.setListener(new SLF4JQueryLoggingListener());
    return proxyDataSource;
}
This way you can choose a pooling or a non-pooling DataSource.
To get a better understanding of your connection pool resource usage, you can configure FlexyPool to collect metrics for:
concurrent connections
concurrent connection requests
data source connection acquiring time
connection lease time
maximum pool size
total connection acquiring time
overflow pool size
retries attempts
I found documentation for hibernate 3.3 and 4.3 that says:
Just replace the hibernate.connection.pool_size property with
connection pool specific settings. This will turn off Hibernate's
internal pool.
Hibernate will use its org.hibernate.connection.C3P0ConnectionProvider
for connection pooling if you set hibernate.c3p0.* properties
So remove hibernate.connection.pool_size and any hibernate.c3p0.* properties from the configuration, and connection pooling is disabled.
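As an illustration of the quoted advice, a minimal sketch of Hibernate properties with no hibernate.c3p0.* and no hibernate.connection.pool_size entries, so the C3P0ConnectionProvider is not selected; the dialect, driver and URL are placeholders for a Firebird setup:
import java.util.Properties;

Properties properties = new Properties();
properties.put("hibernate.dialect", "org.hibernate.dialect.FirebirdDialect");
properties.put("hibernate.connection.driver_class", "org.firebirdsql.jdbc.FBDriver");
properties.put("hibernate.connection.url", "jdbc:firebirdsql://localhost:3050/mydb");
properties.put("hibernate.connection.username", "SYSDBA");
properties.put("hibernate.connection.password", "masterkey");
// no hibernate.c3p0.* and no hibernate.connection.pool_size entries here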
Adding to Vlad's answer:
If somebody still faces this:
Be sure to remove "hibernate-c3p0" from your classpath if it exists, since it automatically enables the MChange c3p0 connection pool.
Another option: you can close the connection manually when closing the entity manager:
// ...
SessionImpl ses = (SessionImpl) session;
close(ses.connection());
try {
    session.close();
} catch (Exception e) {
    logger.error(e);
}
// ...
Note: the above manual closing applies when using Hibernate's built-in default pool.
Good luck.

Dynamically select catalog for Tomcat mysql connection pool in a Spring application

I need to create a connection pool from a Spring application running in a Tomcat server.
This application has many catalogs; the main catalog (it is static), called 'db', has just one table with all existing catalog names and a boolean flag for the "active" one.
When the application starts, I need to find the active catalog among those listed in the main catalog and then select it as the default catalog.
How can I accomplish this?
Until now I used a custom class DataSourceSelector extends DriverManagerDataSource, but now I need to improve the db connection by using a pool, so I thought about a Tomcat DBCP pool.
I would suggest the following steps:
Extend BasicDataSourceFactory to produce customized BasicDataSources.
Those customized BasicDataSources would already know which catalog is active and have the defaultCatalog property set accordingly.
Use your extended BasicDataSourceFactory in the Tomcat configuration, as sketched below.
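A sketch of steps 1 and 2, assuming the commons-dbcp2 BasicDataSourceFactory (or Tomcat's repackaged equivalent); the lookup table and column names (db.catalogs, name, active) are hypothetical placeholders for the single table in the 'db' catalog:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.Name;
import org.apache.commons.dbcp2.BasicDataSource;
import org.apache.commons.dbcp2.BasicDataSourceFactory;

public class ActiveCatalogDataSourceFactory extends BasicDataSourceFactory {

    @Override
    public Object getObjectInstance(Object obj, Name name, Context nameCtx,
                                    Hashtable<?, ?> environment) throws Exception {
        BasicDataSource ds = (BasicDataSource) super.getObjectInstance(obj, name, nameCtx, environment);
        if (ds != null) {
            // Look up the active catalog once, at pool creation time.
            ds.setDefaultCatalog(lookupActiveCatalog(ds));
        }
        return ds;
    }

    private String lookupActiveCatalog(BasicDataSource ds) throws Exception {
        try (Connection c = ds.getConnection();
             Statement st = c.createStatement();
             ResultSet rs = st.executeQuery("SELECT name FROM db.catalogs WHERE active = 1")) {
            rs.next();
            return rs.getString(1);
        }
    }
}
For step 3, this class would then be referenced via the factory attribute of the Tomcat Resource definition.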
@Configuration
public class DataAccessConfiguration {

    @Bean(destroyMethod = "close")
    public javax.sql.DataSource dataSource() {
        org.apache.tomcat.jdbc.pool.DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost/db");
        ds.setUsername("javauser");
        ds.setPassword("");
        ds.setInitialSize(5);
        ds.setMaxActive(10);
        ds.setMaxIdle(5);
        ds.setMinIdle(2);
        return ds;
    }
}
