Environment
HikariCP version: 3.4.1
JDK version: 1.8.0_251
Database: Azure SQL
Spring Boot version: 2.2.2.RELEASE
MS SQL JDBC driver version: 8.4.1-jre8
I am working on a Spring Boot app where I need to configure automatic database failover, and we are leveraging Azure Failover Groups. The application is connected to the primary database; when a manual failover of the primary server is performed, the application should connect to the secondary server, which is now the new primary.
Below is my JDBC connection string and Hikari properties:
logging.level.org.springframework.jdbc.core=DEBUG
spring.datasource.url=jdbc:sqlserver://<FailoverGroupname>.database.windows.net:1433;database=<DatabaseName>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
spring.datasource.username=demo
spring.datasource.password=***
spring.datasource.initialization-mode=always
hibernate.hikari.minimumIdle=0
hibernate.hikari.maxPoolSize=1
hibernate.hikari.autoCommit=false
hibernate.hikari.initializationFailTimeout=3000
hibernate.hikari.connectionTestQuery=SELECT 1
When the application starts, here is what I observe:
Hikari has a valid connection in the pool
A Spring JPA transaction pulls the connection from the pool
Data is persisted to the database successfully
Next, a manual failover is performed
txn.begin() throws an exception (broken pipe: write failed)
The connection is closed (SQLServerException: The connection is closed)
The connection remains closed for as long as the application runs
Expectation:
Since the pool is now in a bad state, the closed connections should be evicted and re-established against the new primary database (the former secondary).
Does anyone know how I can re-establish the closed connections so they automatically reconnect to the new primary?
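For reference, here is a minimal programmatic sketch of the pool set-up I am describing (the class name and the 5-minute maxLifetime are my own placeholders, not values from the app):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class FailoverPoolSketch {
    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        // Failover-group listener endpoint from the question; placeholders kept as-is.
        config.setJdbcUrl("jdbc:sqlserver://<FailoverGroupname>.database.windows.net:1433;"
                + "database=<DatabaseName>;encrypt=true;loginTimeout=30;");
        config.setUsername("demo");
        config.setPassword("***");
        config.setMinimumIdle(0);
        config.setMaximumPoolSize(1);
        config.setAutoCommit(false);
        // Retire connections periodically so the pool re-resolves the
        // failover-group DNS and reconnects to whichever server is primary.
        config.setMaxLifetime(300_000L);
        return new HikariDataSource(config);
    }
}

My understanding is that HikariCP already validates JDBC4 connections on borrow, so the question is why the broken connection keeps being handed out instead of being evicted.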
Related
I have an app that uses Spring Boot and JPA/Hibernate with MySQL. I am getting this error log:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 56,006,037 milliseconds ago. The last packet sent successfully to the server was 56,006,037 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
Here is my application.properties
# DataSource settings: set here configurations for the database connection
spring.datasource.url = jdbc:mysql://localhost:3306/test
spring.datasource.username = test
spring.datasource.password = test
spring.datasource.driverClassName = com.mysql.jdbc.Driver
# Specify the DBMS
spring.jpa.database = MYSQL
# Show or not log for each sql query
spring.jpa.show-sql = true
# Hibernate settings are prefixed with spring.jpa.hibernate.*
spring.jpa.hibernate.ddl-auto = update
spring.jpa.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
spring.jpa.hibernate.naming_strategy = org.hibernate.cfg.ImprovedNamingStrategy
To solve this issue I can use:
spring.datasource.testOnBorrow=true
spring.datasource.validationQuery=SELECT 1
But I have read that this is not recommended. So can anyone suggest what I should do to overcome this error?
The easiest way is to specify the autoReconnect property in the JDBC URL, although this isn't the recommended approach.
spring.datasource.url = jdbc:mysql://localhost:3306/test?autoReconnect=true
This can cause issues when a connection is active and something happens mid-transaction that triggers a reconnect. It does not cause issues when the connection is validated at the start of the transaction and a new connection is acquired at that point.
However, it is probably better to enable validation of your connections during the lifetime of your application. For this you can specify several properties.
Start by specifying the maximum number of connections you allow in the pool. (For a read on determining the max pool size, read this.)
spring.datasource.max-active=10
You might also want to specify the number of initial connections:
spring.datasource.initial-size=5
Next you want to specify the min and max number of idle connections.
spring.datasource.max-idle=5
spring.datasource.min-idle=1
To validate connections you need to specify a validation query and when to validate. Here you want to validate periodically, rather than only when a connection is retrieved from the pool, to prevent broken connections from lingering in the pool:
spring.datasource.test-while-idle=true
spring.datasource.test-on-borrow=true
spring.datasource.validation-query=SELECT 1
NOTE: The usage of a validation query is actually discouraged, as JDBC4 has a better way of doing connection validation: Connection.isValid(). HikariCP will automatically call this JDBC validation method when it is available.
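For illustration, the JDBC4 mechanism works roughly like this (a sketch, not the pool's actual internals):

import java.sql.Connection;
import java.sql.SQLException;

public final class ConnectionValidator {
    // JDBC4-style check: the driver pings the server itself,
    // so no round-trip with a dummy SQL query is needed.
    public static boolean isUsable(Connection connection) {
        try {
            return connection.isValid(5); // 5-second timeout, arbitrary for this example
        } catch (SQLException e) {
            return false;
        }
    }
}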
Now that you are also validating while connections are idle, you need to specify how often you want to run this check and when a connection is considered idle:
spring.datasource.time-between-eviction-runs-millis=5000 (this is the default)
spring.datasource.min-evictable-idle-time-millis=60000 (this is also the default)
All of this should trigger validation of your (idle) connections; when an exception occurs or the idle period has passed, those connections will be removed from the pool.
Assuming you are using Tomcat JDBC as the connection pool, this is a nice read on what to configure and how.
UPDATE: Spring Boot 2.x switched the default connection pool to HikariCP instead of Tomcat JDBC.
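For HikariCP the knobs are different; here is a rough sketch of equivalents for the Tomcat JDBC settings above (the values mirror the examples and are not recommendations):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariEquivalents {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/test");
        config.setUsername("test");
        config.setPassword("test");
        config.setMaximumPoolSize(10);   // ~ max-active
        config.setMinimumIdle(5);        // ~ min-idle (HikariCP has no initial-size)
        config.setIdleTimeout(60_000L);  // ~ min-evictable-idle-time-millis
        // HikariCP has no test-while-idle; it caps total connection lifetime
        // instead and validates on borrow via JDBC4 isValid().
        config.setMaxLifetime(1_800_000L);
        return new HikariDataSource(config);
    }
}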
I want to test pulling data from Apache HBase with a Java application. The application will use SQL-like queries via JDBC to Apache Phoenix.
I've set up my Hadoop "cluster" on one machine using Ambari and the HortonWorks HDP 2.5 platform. I've also Kerberized the environment using Ambari's wizard, where my KDC is a separate machine running Windows Active Directory.
Ambari shows no errors, and I am able to use sqlline.py to successfully make SQL-like calls to HBase through Phoenix. I set up some example tables this way (cf. HortonWorks Phoenix & ODBC tutorial, although I had to kinit etc. first).
However, I am having problems creating a JDBC datasource to be used by the Java application. In my case, I am planning to host the webapp on WildFly 10.1 and I am developing with Eclipse JEE with the JBoss Tools plugin.
These are the steps I used to create the datasource:
Datasource Explorer > Database Connections > New...
Connection Profile: Generic JDBC
URL: jdbc:phoenix:hdfs.eaa.local:2181/hbase-secure:HTTP/hbase.eaa.local@EAA.LOCAL:jboss.server.temp.dir/spnego.service.keytab
Username: hbase (I'm unsure what to put here)
Driver: I've created a new driver of the type "Generic JDBC Driver", and I had to add JAR files for all of the dependencies of phoenix-core-[version].jar. The driver class is org.apache.phoenix.jdbc.PhoenixDriver.
I got the connection string from an existing post in the HortonWorks community, which is why it includes the Kerberos principal and keytab used for the connection.
When I try to test the datasource connection, it churns for about 5 minutes before spitting out an error message (after something like 35 attempts). The client throws Java exceptions saying the sockets are in a "closing state", and the ZooKeeper logs are less helpful:
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560217 with negotiated timeout 40000 for client /192.168.40.3:52674
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43860
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:43860
INFO [Thread-1448:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43860 (no session established for client)
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /192.168.40.41:43922
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560218 with negotiated timeout 40000 for client /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=hbase/hdfs.eaa.local@EAA.LOCAL; authorizationID=hbase/hdfs.eaa.local@EAA.LOCAL.
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@134] - Setting authorizedID: hbase
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@964] - adding SASL authorization for authorizationID: hbase
INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43922 which had sessionid 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:44008
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:44008
INFO [Thread-1449:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:44008 (no session established for client)
NB. 192.168.40.3 is the VPN server, which my host machine is using to tunnel into the environment with the Hadoop cluster. 192.168.40.41 is the machine running the cluster, hdfs.eaa.local.
There are plenty of accepted socket connections which are then immediately closed. Occasionally the client authenticates successfully (so I'm confident in my Kerberos settings) but then there is a session termination immediately afterward.
I've also tried to deploy the datasource directly in WildFly with jboss-cli and standalone.xml and module.xml modifications. But I get lots of problems with missing dependencies that I'm not sure how to resolve without creating a new module for each JAR (and there are a lot) required by phoenix-core-[version].jar. I followed this guide.
What can I do to fix the issue or diagnose further? I've been pulling my hair out for a couple of days now.
You need to add hbase-site.xml and core-site.xml to your classpath.
See How to connect to a Kerberos-secured Apache Phoenix data source with WildFly? for more information.
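With those two files on the classpath, a minimal connection test looks roughly like this (a sketch: the URL is the ZooKeeper quorum from the question, and SYSTEM.CATALOG is queried only because it always exists in Phoenix):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        // hbase-site.xml and core-site.xml must be on the classpath so the
        // driver picks up the secure HBase/Kerberos client settings.
        String url = "jdbc:phoenix:hdfs.eaa.local:2181/hbase-secure";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1")) {
            while (rs.next()) {
                System.out.println("Connected; sample table: " + rs.getString(1));
            }
        }
    }
}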
In a nutshell: when I try to get a connection after not having used a transaction for several minutes, the first transaction set-up fails.
When things are working, my logs show the following for a simple transaction:
DEBUG: org.springframework.transaction.annotation.AnnotationTransactionAttributeSource - Adding transactional method 'getRecord' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,timeout_30; ''
DEBUG: org.springframework.jdbc.datasource.DataSourceTransactionManager - Creating new transaction with name [com.example.services.Service.getRecord]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,timeout_30; ''
DEBUG: org.springframework.jdbc.datasource.DataSourceTransactionManager - Acquired Connection [jdbc:mysql://dev-db.example.com:3306/example, UserName=foo@1.2.3.4, MySQL Connector Java] for JDBC transaction
DEBUG: org.springframework.jdbc.datasource.DataSourceTransactionManager - Switching JDBC Connection [jdbc:mysql://dev-db.example.com:3306/example, UserName=foo@1.2.3.4, MySQL Connector Java] to manual commit
However, if I haven't had any activity for several minutes, instead I will get this message:
DEBUG: org.springframework.transaction.annotation.AnnotationTransactionAttributeSource - Adding transactional method 'getRecord' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,timeout_30; ''
DEBUG: org.springframework.jdbc.datasource.DataSourceTransactionManager - Creating new transaction with name [com.example.services.Service.getRecord]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,timeout_30; ''
My observations so far:
It seems to be based on inactivity, but I've seen this behavior immediately after restarting my Tomcat while nothing else was hitting the database, so I think the inactivity is against a network element such as the MySQL server itself.
When my application starts up, it makes a few non-transactional requests to the database which have not had any problems, so it seems related to transactions.
The timeout element in the @Transactional annotation is not effective in this case. It does eventually time out, but that takes 15 minutes (!).
While this transaction request is busy timing out, I can make subsequent requests successfully.
It doesn't seem to be a starved local connection pool; I have seen this right after restarting Tomcat.
When it finally times out (did I mention 15 minutes!) I get the following:
DEBUG: org.springframework.jdbc.datasource.DataSourceTransactionManager - Acquired Connection [org.apache.commons.dbcp.PoolableConnection@3269c671] for JDBC transaction
DEBUG: org.springframework.jdbc.datasource.DataSourceTransactionManager - Switching JDBC Connection [connection is closed] to manual commit
DEBUG: org.springframework.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
DEBUG: org.springframework.jdbc.datasource.DataSourceUtils - Could not close JDBC Connection
ERROR: java.sql.SQLException: Already closed.
ERROR: org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 1,312,604 milliseconds ago. The last packet sent successfully to the server was 924,748 milliseconds ago.
Caused by: java.net.SocketException: Connection timed out
Running Spring 3.1.1, mysql 5.1.32, commons-dbcp 1.4 and commons-pool 1.5.4.
Does anyone know what this is?
Your problem is that the MySQL server times out idle JDBC connections. This has nothing to do with the TransactionManager set-up.
Have a look at your DataSource set-up. It should test connections on retrieval and/or validate idle connections in the pool.
In commons-dbcp you can set up testing on connection retrieval via the testOnBorrow and validationQuery properties.
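In code, that looks roughly like this for commons-dbcp 1.4's BasicDataSource (a sketch; the URL comes from the question's logs and the credentials are placeholders):

import org.apache.commons.dbcp.BasicDataSource;

public class DataSourceFactory {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://dev-db.example.com:3306/example");
        ds.setUsername("foo");
        ds.setPassword("secret"); // placeholder
        // Validate connections when borrowed, and evict broken idle ones
        // before MySQL's wait_timeout kills them server-side.
        ds.setValidationQuery("SELECT 1");
        ds.setTestOnBorrow(true);
        ds.setTestWhileIdle(true);
        ds.setTimeBetweenEvictionRunsMillis(300_000L); // example: every 5 minutes
        return ds;
    }
}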
I get a communications link failure while the application tries to establish a connection with the DB.
[#|2010-04-08T20:09:57.825+0300|SEVERE|glassfish3.0|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|Cannot connect to database server = com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.|#]
Precisely at this line:
Statement s = conn.createStatement();
where conn is defined as follows:
private static java.sql.Connection conn;
For this app I have set up a connection pool with default parameters, and the app currently uses both JPA and direct JDBC queries. Recreating the connection pool achieved nothing, and pinging the connection pool gave the following message:
Ping Connection Pool for pool is Failed. Ping failed Exception - Connection could not be allocated because: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. Please check the server.log for more details.
Ping failed Exception - Connection could not be allocated because: Communications link failure
and flushing the connection pool gave:
com.sun.enterprise.admin.cli.CommandException: remote failure: Failed to flush connection pool ...
However, I can connect to the database from a terminal. Besides, I have the same app working on my local machine with identical connection pool settings.
Does anyone have an idea what's going on or how to solve this?
This problem can occur if you have the MySQL server and the GlassFish server on the same host, and in the MySQL configuration you have bound MySQL to a public address (for example 192.168.0.1 on the eth0 interface). Plain JDBC/JPA connections using user@localhost normally work fine in that set-up, but GlassFish JTA connections do not: instead of binding to a local address you get a link failure. As a rule, you cannot connect via any local address (localhost/127.0.0.1) of such a MySQL host when a public address is bound.
Example:
my.cnf
bind-address = 127.0.0.1
bind-address = 192.168.0.1
127.0.0.1 - assigned to the lo interface
192.168.0.1 - assigned to the eth0 interface
It is a GlassFish-MySQL bug.
Currently, in order to use JTA, you should not bind MySQL to such an address (remove "bind-address=192.168.0.1" from my.cnf), or you can use user@192.168.0.1, which is less secure.
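A quick way to check this from the GlassFish host is to try both addresses directly with plain JDBC (a sketch; the addresses are the ones from this answer and the credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class BindAddressCheck {
    public static void main(String[] args) {
        // If only the public address is bound, the localhost attempt
        // should fail with a communications link failure.
        for (String host : new String[] {"127.0.0.1", "192.168.0.1"}) {
            String url = "jdbc:mysql://" + host + ":3306/test";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                System.out.println(host + ": connected");
            } catch (Exception e) {
                System.out.println(host + ": " + e.getMessage());
            }
        }
    }
}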
Besides I have the same app working on my local machine with identical connection pool settings.
Are you connecting to the same database? If yes, maybe check that you're using the same JDBC driver.
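One way to confirm which driver actually served a connection (DatabaseMetaData is standard JDBC; the helper class here is made up for the example):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

public final class DriverInfo {
    // Print the JDBC driver name and version behind a live connection.
    public static void print(Connection conn) throws SQLException {
        DatabaseMetaData meta = conn.getMetaData();
        System.out.println(meta.getDriverName() + " " + meta.getDriverVersion());
    }
}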
In my case I had set both of these when creating the connection pool, in the Additional Properties section of the GlassFish admin console:
URL: jdbc:mysql://10.81.35.66:3306/testDB
and
url: jdbc:mysql://10.81.31.76:3306/vectordb