Cloud SQL database seems to always shut down even with 'Always On' activation - java

I am benchmarking Google Cloud SQL. I started with the default "On Demand" activation policy since I work on a test platform with very few hits to save a few $.
It takes about 20-30s to connect for the very first queries (which I think is caused by the database being started up). After that, performance is great.
Now I have switched to the "Always On" activation policy. I was expecting the very first requests on my website to have the same response times as all the others. BUT: just like with the "On Demand" policy, it takes about 30s to reconnect to the database. The time is spent in the connection pool trying to reconnect to the database, so I am sure it is Cloud SQL time.
I suspect the "Always On" policy does absolutely nothing (except maybe cost more $? I haven't checked yet), and I get the feeling that the database still gets shut down. Maybe it just changes the timeout policy slightly?
I found this thread:
First connect from Prestashop to Google Cloud SQL always fails
So apparently there are still timeouts, but they can change depending on the billing plan?
This is very unclear to me.
So here are my questions:
What is the timeout of a SQL instance for "Always ON" policy with "Per Use" billing?
What is the timeout of a SQL instance for "Always ON" policy with "Package" billing?
Is there a way I can manually set my own timeouts? After all, I am the one who pays... If I want my instance to keep running, that is my problem.
EDIT
I am sure it is a connection problem because I previously had a 3-second timeout on web requests. With this timeout set, all my requests threw the following exception:
Caused by: java.sql.SQLException: Cannot get a connection, general error
at org.apache.tomcat.dbcp.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:130)
at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1412)
at org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource.getConnection(AbstractRoutingDataSource.java:148)
at org.hibernate.ejb.connection.InjectedDataSourceConnectionProvider.getConnection(InjectedDataSourceConnectionProvider.java:71)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446)
... 27 more
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
at org.apache.tomcat.dbcp.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:582)
at org.apache.tomcat.dbcp.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:439)
at org.apache.tomcat.dbcp.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:360)
at org.apache.tomcat.dbcp.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:118)
... 31 more

With the help of Cloud SQL support, the problem apparently came from my GCE server "forgetting" TCP connections after 10 minutes. Since it was a test server, the connections to the Cloud SQL instance from there were long-lived, unused connections.
So to work around this, I enabled TCP keepalive, as advised by Google:
sudo bash -c 'echo 60 > /proc/sys/net/ipv4/tcp_keepalive_time'
Information about this configuration can be found at https://cloud.google.com/sql/docs/gce-access (section 6).
Don't forget to restart any application that connects to the Cloud SQL instance after applying this setting.
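Beyond the OS-level keepalive, it can also help to have the connection pool validate connections before handing them out, so a silently dropped connection is discarded and replaced instead of stalling a request. Here is a minimal sketch using plain Apache Commons DBCP2 (the stack trace above uses Tomcat's repackaged copy of it); the URL, credentials, validation query, and eviction interval below are placeholder assumptions:

import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://cloud-sql-host:3306/mydb"); // placeholder
        ds.setUsername("user");                             // placeholder
        ds.setPassword("pass");                             // placeholder
        // Validate each connection on checkout; a stale one is discarded
        // and transparently replaced with a fresh connection.
        ds.setValidationQuery("SELECT 1");
        ds.setTestOnBorrow(true);
        // Also test idle connections periodically so they are evicted
        // before the network silently forgets them.
        ds.setTestWhileIdle(true);
        ds.setTimeBetweenEvictionRunsMillis(5 * 60 * 1000L);
        return ds;
    }
}

With testOnBorrow enabled, a dead connection costs one extra validation round trip rather than a 30-second hang in the pool.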

Related

JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute"

We are using H2 started as a database server process, listening on the standard TCP/IP port 9092.
Our application is deployed in Tomcat and uses connection pooling. We do a purge during idle time, which in the end results in closing all connections to H2. From time to time we observe errors when the application tries to open a connection to H2 again:
SCHEDULERSERVICE schedule: Exception: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.engine.Engine.openSession(Engine.java:209)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
at java.lang.Thread.run(Thread.java:748)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
The problem occurs when the Tomcat connection pool closes all idle (unused) connections and one connection still in use is closed afterwards.
The next attempt to open a new connection fails; a retry succeeds after some wait time.
Under which circumstances does this exception happen?
What does the exception mean?
Are there any recommendations to follow to avoid the problem?
It sounds to me like H2 closes the database after the last connection has been closed.
When does the database close occur?
How can database closures be controlled?
Thx in advance
Thorsten
An embedded database in a web application needs careful handling of its lifecycle.
You can add a javax.servlet.ServletContextListener implementation (marked with the @WebListener annotation or included in web.xml) and perform an explicit database shutdown in its contextDestroyed() method.
You can force the database shutdown there with connection.createStatement().execute("SHUTDOWN"). If your application needs to write something to the database during unload, it should do so before that command.
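For illustration, a minimal sketch of such a listener; the @Resource lookup and the resource name jdbc/myDataSource are assumptions, so adapt it to however your application obtains its connections:

import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import javax.sql.DataSource;

@WebListener
public class H2ShutdownListener implements ServletContextListener {

    @Resource(name = "jdbc/myDataSource") // hypothetical resource name
    private DataSource dataSource;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try (Connection connection = dataSource.getConnection()) {
            // Close the database immediately instead of waiting for the
            // last pooled connection to be released.
            connection.createStatement().execute("SHUTDOWN");
        } catch (SQLException e) {
            sce.getServletContext().log("H2 shutdown failed", e);
        }
    }
}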
Without the explicit shutdown, H2 closes the database when all connections are closed, unless some other behavior was configured explicitly (with parameters in the JDBC URL, for example). DB_CLOSE_DELAY sets an additional delay; maybe your application uses that setting and therefore H2 doesn't close the database immediately, or the application doesn't close all connections immediately.
Anyway, when you're trying to update the web application on the fly, Tomcat tries to initialize the new version before the old version is unloaded. If H2 is in the classpath of the web application itself, the new version will be unable to connect to the database during the short period when the new version is already online but the old version isn't unloaded yet.
If you don't like it, you can run the standalone H2 Server process and use remote connections to it in your web applications.
Another option is to move H2 to the classpath of Tomcat itself and configure the connection pool as resource in the server.xml, in that case it shouldn't be affected by the lifecycle of your applications.
In both these cases you shouldn't use the SHUTDOWN command.
UPDATED
With client-server connections to a remote server, this exception means that the server decided to close the database because there were no active connections. This operation can't be interrupted or reverted in the middle. An attempt to open a new connection to the same database during this process waits at most 1 minute for the close to complete before re-opening the database. This timeout is not configurable.
There are two possible solutions.
The DB_CLOSE_DELAY setting can be used with some large value in seconds. When all connections are closed, the database will stay online for the specified number of seconds. -1 can also be used to set an infinite timeout.
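For example, the delay can be set directly in the JDBC URL. A small sketch, assuming the server from the question on port 9092 and a hypothetical database name:

import java.sql.Connection;
import java.sql.DriverManager;

public class H2KeepOpen {
    public static void main(String[] args) throws Exception {
        // DB_CLOSE_DELAY=-1 keeps the database open even after the last
        // connection closes, so a later reconnect never has to wait for an
        // in-progress database close. "~/mydb" and the empty password are
        // assumptions.
        String url = "jdbc:h2:tcp://localhost:9092/~/mydb;DB_CLOSE_DELAY=-1";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            conn.createStatement().execute("SELECT 1");
        }
    }
}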
You can try to speed up the shutdown process, but you have to figure out for yourself what takes so much time. The file compaction procedure is limited to 200 milliseconds by default; it may take longer, but I think it shouldn't be that long. Maybe you have a lot of temporary objects or uncommitted data, or a very high fragmentation of the database file. It's hard to say what's going wrong without further investigation.

PostgreSQL JDBC Connection issue

We have a PostgreSQL 9.6 instance on an Ubuntu 18.04 machine. When we restart the Java services deployed in a Kubernetes cluster, the already existing idle connections don't get removed and the services create new connections on each restart. Because of this we have hit the connection limit many times, and we have to terminate connections manually each time. The same service versions are deployed on other instances, but we don't see this scenario on those servers.
I have some questions regarding this
Can it be a PostgreSQL configuration issue? I didn't find any timeout-related setting difference between the two instances (one is working fine and the other isn't).
If this is a Java service issue, what should I check?
If it's neither a PostgreSQL issue nor a Java issue, what should I look into?
If the client process dies without closing the database connection properly, it takes a while (2 hours by default) for the server to notice that the connection is dead.
The mechanism for that is provided by TCP and is called keepalive: after a certain idle time, the operating system starts sending keepalive packets. After a certain number of such packets without response, the TCP connection is closed, and the database backend process will die.
To make PostgreSQL detect dead connections faster, set the tcp_keepalives_idle parameter in postgresql.conf to less than 7200 seconds.
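On the client side, you can additionally ask the JDBC driver to enable keepalive on its own sockets, so the kernel settings above actually apply to the database connections. A minimal sketch with the PostgreSQL JDBC driver; the host, database, and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PgKeepAlive {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "postgres");   // placeholder
        props.setProperty("password", "secret"); // placeholder
        // Ask the driver to set SO_KEEPALIVE on its socket so the OS
        // keepalive probes cover this connection.
        props.setProperty("tcpKeepAlive", "true");
        String url = "jdbc:postgresql://db-host:5432/mydb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}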

Getting "Write attempt on defunct connection" Error From Datastax Cassandra Java Driver

I have a web service application using Cassandra 2.0 and the DataStax Java driver 2.0.2. I sometimes get the stack trace below when trying to write to/read from the database, especially if the application has been sitting there for a while (like overnight). This error usually goes away when I retry; however, sometimes it persists and I have to restart the web app to get rid of it.
I wonder if this is some sort of "stale connection" issue. However, the DataStax Java driver documentation indicates it is supposed to keep the connection alive.
I did a Google search on the error message and only two (!) hits were given by Google. They are related. This is the answer in one of the Google results:
Sylvain Lebresne, Apr 2: You're running into https://datastax-oss.atlassian.net/browse/JAVA-250. We'll fix it soon hopefully (I have some half-finished patch that I need to finish), but currently, if you restart a whole cluster without doing queries during the restart, it can sometimes happen that you'll get this before the cluster properly reconnects. In the meantime and as a workaround, you can always make sure to run a few trivial queries while you're doing the cluster restart to avoid it.
However, this does not look like my scenario because we are not restarting the cluster at all. I wonder if anyone has any insights about this error?
Stacktrace:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: ec2-54-197-xxx-xxx.compute-1.amazonaws.com/54.197.xxx.xxx:9042 (com.datastax.driver.core.ConnectionException: [ec2-54-197-xxx-xxx.compute-1.amazonaws.com/54.197.xxx.xxx:9042] Write attempt on defunct connection))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:92)
I have what I believe is the exact same issue (Write attempt on defunct connection) on my development machine intermittently.
It seems to happen when my dev machine goes to sleep while the server is up. Obviously there's no power management in the AWS cluster you're running, but it gives you a hint - the key is that something is breaking your control connection or intermittently preventing network connectivity between your hosts.
You should see the reconnection thread in your logs:
21:34:51.616 [Reconnection-1] ERROR c.d.driver.core.ControlConnection - [Control connection] Cannot connect to any host, scheduling retry in 2000 milliseconds
The next request after this will always succeed in my experience.
TL;DR: check for networking issues or any intermittent shutdown of servers that could break the control connection. The driver should do a better job of re-establishing broken control connections; it sounds like they're working on it for JAVA-250.
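Until then, a blunt client-side workaround is to retry once after the driver reports that all hosts failed, giving the reconnection thread time to restore the control connection. A minimal sketch against the 2.0-era DataStax driver API; the single retry and the 2-second pause are arbitrary assumptions:

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public final class RetryingExecutor {
    // Executes the statement, retrying once after a short pause if every
    // host failed (e.g. "Write attempt on defunct connection").
    public static ResultSet executeWithRetry(Session session, String cql)
            throws InterruptedException {
        try {
            return session.execute(cql);
        } catch (NoHostAvailableException e) {
            Thread.sleep(2000L); // let the reconnection thread catch up
            return session.execute(cql);
        }
    }
}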

How to fix error: [BEA][SQLServer JDBC Driver]No more data available to read

My Java application uses DB connection pooling. One piece of functionality started failing today with this error:
[BEA][SQLServer JDBC Driver]No more data available to read
This doesn't occur daily. Once I restart my application server, things look fine for some days, then the error comes back.
Has anyone encountered this error? Reasons might vary, but I would like to know the various reasons so I can mitigate the issue.
Is it possible that the database or network connection has briefly had an outage? You might expect any currently open result sets then to become invalid, with resulting errors.
I've never seen this particular error, but then I don't work with BEA or SQL Server; a quick Google does show other folks suggesting such a cause.
When you're using a connection pool and such a glitch occurs, all connections in the pool become "stale" or invalid. My application server (WebSphere) has the option to discard the entire connection pool after particular errors are detected. The result is that one unlucky request sees the error, but subsequent requests get a new connection and recover. If you don't discard the whole pool, then you get a failure as each stale connection is used and discarded.
I suggest you investigate a) whether your app server has such a capability, and b) how your application responds if the database is bounced; if this replicates the error, then maybe you've found the cause.

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:Communications link failure [duplicate]

This question already has answers here:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
(51 answers)
Closed 6 years ago.
My program that connects to a MySQL database was working fine. Then, without changing any code used to set up the connection, I get this exception:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
What happened?
The code used to get the connection:
private static Connection getDBConnection() throws SQLException, InstantiationException, IllegalAccessException, ClassNotFoundException {
    String username = "user";
    String password = "pass";
    String url = "jdbc:mysql://www.domain.com:3306/dbName?connectTimeout=3000";
    Class.forName("com.mysql.jdbc.Driver");
    Connection conn = DriverManager.getConnection(url, username, password);
    return conn;
}
This is a wrapped exception and not really interesting in itself. It's the root cause further down that actually tells us something. Please look a bit deeper in the stack trace. The chances are that you'll then find a SQLException: Connection refused or SQLException: Connection timed out.
If this is true in your case as well, then all the possible causes are:
IP address or hostname in JDBC URL is wrong.
Hostname in JDBC URL is not recognized by local DNS server.
Port number is missing or wrong in JDBC URL.
DB server is down.
DB server doesn't accept TCP/IP connections.
Something in between Java and DB is blocking connections, e.g. a firewall or proxy.
To solve these, follow this advice (in the same order as the causes above):
Verify and test them with ping.
Refresh DNS or use IP address in JDBC URL instead.
Verify it based on my.cnf of MySQL DB.
Start it.
Verify if mysqld is started without the --skip-networking option.
Disable firewall and/or configure firewall/proxy to allow/forward the port.
By the way (and unrelated to the actual problem), you don't necessarily need to load the JDBC driver on every getConnection() call. Loading it just once during startup is enough.
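For illustration, a minimal sketch that loads the driver once in a static initializer; the URL and credentials are the placeholders from the question, and with JDBC 4.0+ drivers the Class.forName call can be dropped entirely:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Database {
    private static final String URL =
            "jdbc:mysql://www.domain.com:3306/dbName?connectTimeout=3000";

    static {
        try {
            // Load the driver exactly once, at class-initialization time.
            Class.forName("com.mysql.jdbc.Driver");
        } catch (ClassNotFoundException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static Connection getConnection() throws SQLException {
        return DriverManager.getConnection(URL, "user", "pass");
    }
}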
Check the wait_timeout set on the DB server.
Sometimes it defaults to as little as 10 seconds, which drops idle connections after 10 seconds:
mysql> SHOW GLOBAL VARIABLES LIKE '%time%';
Update it to something larger, like 28800:
mysql> SET GLOBAL wait_timeout = 28800;
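If you'd rather check the effective value from Java than from the mysql shell, here is a small sketch; the URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WaitTimeoutCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/dbName", "user", "pass");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SHOW GLOBAL VARIABLES LIKE 'wait_timeout'")) {
            if (rs.next()) {
                // Prints e.g. "wait_timeout = 28800"
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}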
I've been having this issue also for about 8-9 days.
Here's some background: I'm developing a simple Java application that runs in bash.
Details:
Spring 2.5.6
Hibernate 3.2.3.GA
With Maven.
(The base of the project is from mkyong.com, the Spring tutorial without annotations.)
MySQL version:
[jvazquez#archbox ~]$ mysql --version
mysql Ver 14.14 Distrib 5.5.9, for Linux (i686) using readline 5.1
Linux archbox 2.6.37-ARCH #1 SMP PREEMPT Fri Feb 18 16:58:42 UTC 2011 i686 Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz GenuineIntel GNU/Linux
The application works fine in Arch Linux, Mac OS X 10.6, and FreeBSD 7.2.
When I moved the jar file to another Arch Linux box on a different host, using the same MySQL version, a similar my.cnf, and a similar kernel version, the connection died and I got the same error as the original poster:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure
I tried every possible combination for this that I found on SO and the forums (http://forums.mysql.com/read.php?39,180347,180347#msg-180347 for example, which is closed now so I can't post there), specifically:
Triple-checked that I wasn't using skip-networking (verified with ps aux and my.cnf)
Tried enabling log_warnings=1 in my.cnf, but obviously I wasn't hitting the server, so I didn't see anything while using the app
SHOW ENGINE INNODB STATUS didn't show anything at all; during the tests I could connect via shell, and PHP also connected to the MySQL server
/etc/hosts has localhost 127.0.0.1
Tried the JDBC properties using localhost and 127.0.0.1 with no results
Tried adding c3p0 and changed max_wait
max_connections in my.cnf was changed to 900, then 2000, and still nothing
Added wait_timeout = 60 to my.cnf
Added net_wait_timeout = 360 to my.cnf
Added destroy-method="close" to spring.xml
As was pointed out (if you look up the same exception, you will find several SO threads about the issue, for example Reproduce com.mysql.jdbc.exceptions.jdbc4.CommunicationsException with a setup of Spring, Hibernate and C3P0).
If you are using Tomcat, please check for the security exception (again, it is on SO, you will find it)
Check that you can resolve the URL that you are using
Try adding c3p0
Verify that there isn't a firewall rejecting your connections
Finally, if you are using GNU/Linux (Arch Linux for example) and you indeed get this exception, try this thread:
MySQL Forums :: JDBC and Java :: EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost
If the link gets removed, just add mysqld:ALL to /etc/hosts.allow
I know that this is a bit lengthy, but it may help anybody using GNU/Linux who hits this exception, and this thread seemed the best place to post my research.
Hope it helps.
I got the same error, but then I figured out it was because the MySQL server was not running at the time.
To change the status of the server:
Go to Task Manager
Go to Services
Search for your MySQL server (e.g. in my case it's MYSQL56)
Under the Status column you will see it is not running
Right-click it and select Start
Hope this will help.
We have a piece of software (a webapp running in Tomcat) using Apache Commons connection pooling, and it worked great for years. In the last month I had to update the libraries due to an old bug we were encountering; the bug had been fixed in a recent version.
Shortly after deploying this, we started getting exactly these messages. Out of the thousands of connections we'd get a day, a handful (under 10, usually) would get this error message. There was no real pattern, except they would sometimes cluster in little groups of 2 to 5.
I changed the options on the pool to validate the connection every time one is taken from or put back into the pool (if one is found to be bad, a new one is generated instead), and the problem went away.
Have you updated your MySQL jar lately? It seems like there may be a new setting that didn't use to be there in our (admittedly very old) jar.
I agree with BalusC to try some other options on your config, such as those you're passing to MySQL (in addition to the connection timeout).
If this failure is transient like mine was, instead of permanent, then you could use a simple try/catch and a loop to keep trying until things succeed or use a connection pool to handle that detail for you.
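For completeness, a minimal sketch of such a retry loop; the attempt count and linear backoff are arbitrary assumptions, and a pool with connection validation is the cleaner fix:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectWithRetry {
    // Tries to connect up to maxAttempts times, backing off between tries.
    public static Connection connect(String url, String user, String pass,
                                     int maxAttempts)
            throws SQLException, InterruptedException {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return DriverManager.getConnection(url, user, pass);
            } catch (SQLException e) {
                last = e;                      // remember the last failure
                Thread.sleep(1000L * attempt); // simple linear backoff
            }
        }
        throw last != null ? last : new SQLException("maxAttempts must be >= 1");
    }
}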
Other random idea: I don't know what happens when you try to use a closed connection (i.e. which exception you get). Could you be accidentally closing the connection somewhere?
Ensure skip-networking is commented out in my.cnf/my.ini
As BalusC mentioned, it would be very useful to post the full stacktrace (always post a full stacktrace, it is useless and frustrating to have only the first lines of a stacktrace).
Anyway, you mentioned that your code was working fine and that this problem suddenly started without any code change, so I'm wondering if this could be related to your other question, Problem with not closing db connection while debugging? Actually, if this problem started while debugging, then I think it is (you ran out of connections). In that case, restart your database server (and follow the suggestions from the other question to avoid this situation).
I encountered the same problem. I am using Spring, DBCP and MySQL 5.5. If I change localhost to 192.168.1.110 then everything works. What makes things weirder is that mysql -h localhost works just fine.
Update: finally found a solution. Changing bind-address to localhost or 127.0.0.1 in my.cnf fixes the problem.
In my case, the local loopback interface wasn't started, so "localhost" couldn't be resolved.
You can check this by running "ifconfig" and you should see an interface called "lo". If it is not up, you can activate it by running "ifup lo" or "ifconfig lo up".
In my case, the Connector/J 5.1.29 .jar downloaded from mysql.com had this error, whereas the 5.1.29 .jar downloaded from MvnRepository did not.
This happened when building a Google appengine application in Android Studio (gradle, Windows x64), communicating to a Linux MySQL server on the local network/local VM.
I see you are connecting to a remote host. Now the question is: what type of network are you using to connect to the internet?
WINDOWS
If it's a mobile broadband device, get your machine's IP address and add it to your hosting server so that the host server allows connections coming from your machine (your host might have turned this off for security reasons).
Note that every time you use a different network device, your IP changes.
If you are using a LAN, then set a static IP address on your machine and add it to your host.
I hope this helps!! :)
I got the communications failure error when using a java.sql.PreparedStatement with a specific statement.
This was running against MySQL 5.6, Tomcat 7.0.29 and JDK 1.7.0_67 on a Windows 7 x64 machine.
The cause turned out to be binding an integer to a string parameter and a string to an integer parameter, then trying to perform executeQuery on the prepared statement. After I corrected the order of parameter setting, the statement performed correctly.
This had nothing to do with network issues as the wording of the error message suggested.
The essential problem is that when pooled MySQL JDBC connections sit unused, MySQL's timeout closes them. You need to change the pool parameters so the connection is re-established when it fails, this way:
Connection Validation: Required (Check)
Validation Method: autocommit
You can change the validation method if you cannot get it to work!
If you use WAMP, make sure it is online. What I did was first turn my firewall off; then it worked, so after that I allowed connections for all local ports, especially port 80. That got rid of the problem. For me it was the firewall that was blocking the connection.
I had the same problem, and I used most of the params (autoReconnect etc.) but didn't try test_on_idle or test_on_connect; I am going to try those next.
However, I had this hack that got me through it:
I have a cron job called Healthcheck. It wakes up every 10 minutes and makes a REST API call to the server. The web/app server picks this up, connects to the db, makes a small change, and comes back with a 'yes all quiet on the western front' or a 'shitshappening'. In the latter case, it sends a pager/email to the right people.
It has the side effect of always keeping the db connection pool fresh. As long as this cron job is running, I don't have the db connection timeout issues; otherwise, they crop up.
